2008


mGene: A Novel Discriminative Gene Finder

Schweikert, G., Zeller, G., Zien, A., Behr, J., Sonnenburg, S., Philips, P., Ong, C., Rätsch, G.

Worm Genomics and Systems Biology meeting, July 2008 (talk)

[BibTex]

A Novel Protocol for Accuracy Assessment in Classification of Very High Resolution Multispectral and SAR Images

Bruzzone, L., Persello, C.

In pages: II-265-II-268, IEEE, Piscataway, NJ, USA, IEEE International Geoscience and Remote Sensing Symposium (IGARSS), July 2008 (inproceedings)

Abstract
This paper presents a novel protocol for the accuracy assessment of thematic maps obtained by the classification of very high resolution images. As thematic accuracy alone is not sufficient to adequately characterize the geometrical properties of classification maps, we propose a novel protocol that is based on the analysis of two families of indexes: (i) the traditional thematic accuracy indexes, and (ii) a set of geometric indexes that characterize different geometric properties of the objects recognized in the map. These indexes can be used in the training phase of a classifier to identify the parameter values that optimize classification results on the basis of a multi-objective criterion. Experimental results obtained on Quickbird images show the effectiveness of the proposed protocol in selecting classification maps characterized by a better tradeoff between thematic and geometric accuracy than standard accuracy measures provide.

Web DOI [BibTex]

A Decoupled Approach to Exemplar-based Unsupervised Learning

Nowozin, S., BakIr, G.

In ICML 2008, pages: 704-711, (Editors: Cohen, W. W., A. McCallum, S. Roweis), ACM Press, New York, NY, USA, 25th International Conference on Machine Learning, July 2008 (inproceedings)

Abstract
A recent trend in exemplar-based unsupervised learning is to formulate the learning problem as a convex optimization problem. Convexity is achieved by restricting the set of possible prototypes to training exemplars. In particular, this has been done for clustering, vector quantization and mixture model density estimation. In this paper we propose a novel algorithm that is theoretically and practically superior to these convex formulations. This is possible by posing the unsupervised learning problem as a single convex "master problem" with non-convex subproblems. We show that for the above learning tasks the subproblems are extremely well-behaved and can be solved efficiently.

PDF Web DOI [BibTex]

Discovering Common Sequence Variation in Arabidopsis thaliana

Rätsch, G., Clark, R., Schweikert, G., Toomajian, C., Ossowski, S., Zeller, G., Shinn, P., Warthman, N., Hu, T., Fu, G., Hinds, D., Cheng, H., Frazer, K., Huson, D., Schölkopf, B., Nordborg, M., Ecker, J., Weigel, D., Schneeberger, K., Bohlen, A.

16th Annual International Conference Intelligent Systems for Molecular Biology (ISMB), July 2008 (talk)

Web [BibTex]

A Novel Approach to the Selection of Robust and Invariant Features for Classification of Hyperspectral Images

Bruzzone, L., Persello, C.

In pages: I-66-I-69, IEEE, Piscataway, NJ, USA, IEEE International Geoscience and Remote Sensing Symposium (IGARSS), July 2008 (inproceedings)

Abstract
This paper presents a novel approach to feature selection for the classification of hyperspectral images. The proposed approach aims at selecting a subset of the original set of features that exhibits two main properties: (i) high capability to discriminate among the considered classes, (ii) high invariance (stationarity) in the spatial domain of the investigated scene. The feature selection is accomplished by defining a multi-objective criterion that considers two terms: (i) a term that assesses the class separability, (ii) a term that evaluates the spatial invariance of the selected features. The multi-objective problem is solved by an evolutionary algorithm that estimates the Pareto-optimal solutions. Experiments carried out on a hyperspectral image acquired by the Hyperion sensor confirmed the effectiveness of the proposed technique.

Web DOI [BibTex]

The skew spectrum of graphs

Kondor, R., Borgwardt, K.

In pages: 496-503, (Editors: Cohen, W.W. , A. McCallum, S.T. Roweis), ACM Press, New York, NY, USA, Twenty-Fifth International Conference on Machine Learning (ICML), July 2008 (inproceedings)

Abstract
The central issue in representing graph-structured data instances in learning algorithms is designing features which are invariant to permuting the numbering of the vertices. We present a new system of invariant graph features which we call the skew spectrum of graphs. The skew spectrum is based on mapping the adjacency matrix of any (weighted, directed, unlabeled) graph to a function on the symmetric group and computing bispectral invariants. The reduced form of the skew spectrum is computable in O(n^3) time, and experiments show that on several benchmark datasets it can outperform state of the art graph kernels.

Web DOI [BibTex]

Relating clustering stability to properties of cluster boundaries

Ben-David, S., von Luxburg, U.

In COLT 2008, pages: 379-390, (Editors: Servedio, R. A., T. Zhang), Omnipress, Madison, WI, USA, 21st Annual Conference on Learning Theory, July 2008 (inproceedings)

Abstract
In this paper, we investigate stability-based methods for cluster model selection, in particular to select the number K of clusters. The scenario under consideration is that clustering is performed by minimizing a certain clustering quality function, and that a unique global minimizer exists. On the one hand we show that stability can be upper bounded by certain properties of the optimal clustering, namely by the mass in a small tube around the cluster boundaries. On the other hand, we provide counterexamples which show that a reverse statement is not true in general. Finally, we give some examples and arguments why, from a theoretic point of view, using clustering stability in a high sample setting can be problematic. It can be seen that distribution-free guarantees bounding the difference between the finite sample stability and the “true stability” cannot exist, unless one makes strong assumptions on the underlying distribution.

PDF Web [BibTex]

Coding Theory in Brain-Computer Interfaces

Martens, SMM.

Soria Summerschool on Computational Mathematics "Algebraic Coding Theory" (S3CM), July 2008 (talk)

Web [BibTex]

Compressed Sensing and Bayesian Experimental Design

Seeger, M., Nickisch, H.

In ICML 2008, pages: 912-919, (Editors: Cohen, W. W., A. McCallum, S. Roweis), ACM Press, New York, NY, USA, 25th International Conference on Machine Learning, July 2008 (inproceedings)

Abstract
We relate compressed sensing (CS) with Bayesian experimental design and provide a novel efficient approximate method for the latter, based on expectation propagation. In a large comparative study about linearly measuring natural images, we show that the simple standard heuristic of measuring wavelet coefficients top-down systematically outperforms CS methods using random measurements; the sequential projection optimisation approach of (Ji & Carin, 2007) performs even worse. We also show that our own approximate Bayesian method is able to learn measurement filters on full images efficiently which outperform the wavelet heuristic. To our knowledge, ours is the first successful attempt at "learning compressed sensing" for images of realistic size. In contrast to common CS methods, our framework is not restricted to sparse signals, but can readily be applied to other notions of signal complexity or noise models. We give concrete ideas for how our method can be scaled up to large signal representations.

PDF PDF Web DOI [BibTex]

Tailoring density estimation via reproducing kernel moment matching

Song, L., Zhang, X., Smola, A., Gretton, A., Schölkopf, B.

In Proceedings of the 25th International Conference on Machine Learning, pages: 992-999, (Editors: WW Cohen and A McCallum and S Roweis), ACM Press, New York, NY, USA, ICML, July 2008 (inproceedings)

Abstract
Moment matching is a popular means of parametric density estimation. We extend this technique to nonparametric estimation of mixture models. Our approach works by embedding distributions into a reproducing kernel Hilbert space, and performing moment matching in that space. This allows us to tailor density estimators to a function class of interest (i.e., for which we would like to compute expectations). We show our density estimation approach is useful in applications such as message compression in graphical models, and image classification and retrieval.
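
As a rough sketch of the idea (notation mine, not the paper's exact estimator): the RKHS mean embedding of a distribution and the corresponding moment-matching objective for a mixture model with parameters theta can be written as

\[ \mu_P = \mathbb{E}_{x\sim P}\,[k(\cdot,x)], \qquad \hat\mu_X = \frac{1}{n}\sum_{i=1}^{n} k(\cdot,x_i), \qquad \min_{\theta}\ \Big\| \hat\mu_X - \mathbb{E}_{x\sim p_\theta}\,[k(\cdot,x)] \Big\|_{\mathcal{H}}^{2}, \]

i.e. the mixture parameters are chosen so that the model's mean embedding matches the empirical embedding of the data in the RKHS norm.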

PDF PDF DOI [BibTex]

Injective Hilbert Space Embeddings of Probability Measures

Sriperumbudur, B., Gretton, A., Fukumizu, K., Lanckriet, G., Schölkopf, B.

In Proceedings of the 21st Annual Conference on Learning Theory, pages: 111-122, (Editors: RA Servedio and T Zhang), Omnipress, Madison, WI, USA, 21st Annual Conference on Learning Theory (COLT), July 2008 (inproceedings)

Abstract
A Hilbert space embedding for probability measures has recently been proposed, with applications including dimensionality reduction, homogeneity testing and independence testing. This embedding represents any probability measure as a mean element in a reproducing kernel Hilbert space (RKHS). The embedding function has been proven to be injective when the reproducing kernel is universal. In this case, the embedding induces a metric on the space of probability distributions defined on compact metric spaces. In the present work, we consider more broadly the problem of specifying characteristic kernels, defined as kernels for which the RKHS embedding of probability measures is injective. In particular, characteristic kernels can include non-universal kernels. We restrict ourselves to translation-invariant kernels on Euclidean space, and define the associated metric on probability measures in terms of the Fourier spectrum of the kernel and characteristic functions of these measures. The support of the kernel spectrum is important in finding whether a kernel is characteristic: in particular, the embedding is injective if and only if the kernel spectrum has the entire domain as its support. Characteristic kernels may nonetheless have difficulty in distinguishing certain distributions on the basis of finite samples, again due to the interaction of the kernel spectrum and the characteristic functions of the measures.
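
A minimal sketch of the quantity this analysis concerns: the (biased) empirical squared MMD between two samples under a Gaussian kernel, which is translation invariant with full-support spectrum and hence characteristic. Function names and the toy data are illustrative, not from the paper.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2_biased(X, Y, sigma=1.0):
    """Biased estimate of the squared MMD between samples X ~ P and Y ~ Q."""
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

# Toy usage: samples from the same distribution give a small value,
# samples from shifted distributions a larger one.
rng = np.random.default_rng(0)
X1 = rng.normal(0.0, 1.0, size=(200, 2))
X2 = rng.normal(0.0, 1.0, size=(200, 2))
Y = rng.normal(1.0, 1.0, size=(200, 2))
print(mmd2_biased(X1, X2), mmd2_biased(X1, Y))
```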

PDF Web [BibTex]

Motor Skill Learning for Cognitive Robotics

Peters, J.

6th International Cognitive Robotics Workshop (CogRob), July 2008 (talk)

Abstract
Autonomous robots that can assist humans in situations of daily life have been a long standing vision of robotics, artificial intelligence, and cognitive sciences. A first step towards this goal is to create robots that can learn tasks triggered by environmental context or higher level instruction. However, learning techniques have yet to live up to this promise as only few methods manage to scale to high-dimensional manipulator or humanoid robots. In this tutorial, we give a general overview on motor skill learning for cognitive robotics using research at ATR, USC, CMU and Max-Planck in order to illustrate the problems in motor skill learning. For doing so, we discuss task-appropriate representations and algorithms for learning robot motor skills. Among the topics are learning basic movements or motor primitives by imitation and reinforcement learning, learning rhythmic and discrete movements, fast regression methods for learning inverse dynamics, and setups for learning task-space policies. Examples on various robots, e.g., SARCOS DB, the SARCOS Master Arm, BDI Little Dog and a Barrett WAM, are shown and include Ball-in-a-Cup, T-Ball, Juggling, Devil-Sticking, Operational Space Control and many others.

Web [BibTex]

A Hilbert-Schmidt Dependence Maximization Approach to Unsupervised Structure Discovery

Blaschko, M., Gretton, A.

In MLG 2008, pages: 1-3, 6th International Workshop on Mining and Learning with Graphs, July 2008 (inproceedings)

Abstract
In recent work by (Song et al., 2007), it has been proposed to perform clustering by maximizing a Hilbert-Schmidt independence criterion with respect to a predefined cluster structure Y, by solving for the partition matrix Π. We extend this approach here to the case where the cluster structure Y is not fixed, but is a quantity to be optimized; and we use an independence criterion which has been shown to be more sensitive at small sample sizes (the Hilbert-Schmidt Normalized Information Criterion, or HSNIC, Fukumizu et al., 2008). We demonstrate the use of this framework in two scenarios. In the first, we adopt a cluster structure selection approach in which the HSNIC is used to select a structure from several candidates. In the second, we consider the case where we discover structure by directly optimizing Y.
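
The dependence measure underlying this line of work is the Hilbert-Schmidt independence criterion; below is a minimal sketch of the standard biased empirical HSIC (up to normalization conventions), with illustrative kernels and data. The normalized HSNIC variant used in the paper is not reproduced here.

```python
import numpy as np

def rbf(X, sigma=1.0):
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    return np.exp(-sq / (2 * sigma**2))

def hsic_biased(X, Y, sigma_x=1.0, sigma_y=1.0):
    """Biased empirical HSIC: (1/n^2) tr(K H L H) with centering matrix H."""
    n = X.shape[0]
    K, L = rbf(X, sigma_x), rbf(Y, sigma_y)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / n**2

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 1))
print(hsic_biased(X, X + 0.1 * rng.normal(size=(300, 1))),  # dependent pair
      hsic_biased(X, rng.normal(size=(300, 1))))            # independent pair
```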

PDF Web [BibTex]

Painless Embeddings of Distributions: the Function Space View (Part 1)

Fukumizu, K., Gretton, A., Smola, A.

25th International Conference on Machine Learning (ICML), July 2008 (talk)

Abstract
This tutorial will give an introduction to the recent understanding and methodology of the kernel method: dealing with higher-order statistics by painlessly embedding random variables/probability distributions. In the early days of kernel machines research, the "kernel trick" was considered a useful way of constructing nonlinear algorithms from linear ones. More recently, however, it has become clear that a potentially more far-reaching use of kernels is as a linear way of dealing with higher-order statistics by embedding distributions in a suitable reproducing kernel Hilbert space (RKHS). Notably, unlike the straightforward expansion of higher-order moments or the conventional characteristic function approach, the use of kernels or RKHSs provides a painless, tractable way of embedding distributions. This line of reasoning leads naturally to the questions: what does it mean to embed a distribution in an RKHS? When is this embedding injective (and thus, when do different distributions have unique mappings)? What implications are there for learning algorithms that make use of these embeddings? This tutorial aims at answering these questions. There are a great variety of applications in machine learning and computer science which require distribution estimation and/or comparison.

PDF Web [BibTex]

Reinforcement Learning for Robotics

Peters, J.

8th European Workshop on Reinforcement Learning for Robotics (EWRL), July 2008 (talk)

Web [BibTex]

Adaptive Importance Sampling with Automatic Model Selection in Value Function Approximation

Hachiya, H., Akiyama, T., Sugiyama, M., Peters, J.

In AAAI 2008, pages: 1351-1356, (Editors: Fox, D. , C. P. Gomes), AAAI Press, Menlo Park, CA, USA, Twenty-Third Conference on Artificial Intelligence, July 2008 (inproceedings)

Abstract
Off-policy reinforcement learning is aimed at efficiently reusing data samples gathered in the past, which is an essential problem for physically grounded AI as experiments are usually prohibitively expensive. A common approach is to use importance sampling techniques for compensating for the bias caused by the difference between data-sampling policies and the target policy. However, existing off-policy methods do not often take the variance of value function estimators explicitly into account and therefore their performance tends to be unstable. To cope with this problem, we propose using an adaptive importance sampling technique which allows us to actively control the trade-off between bias and variance. We further provide a method for optimally determining the trade-off parameter based on a variant of cross-validation. We demonstrate the usefulness of the proposed approach through simulations.
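
A hedged sketch of the kind of estimator described here, assuming episodic data and a flattening exponent nu in [0, 1] that interpolates between ignoring the importance weights (nu = 0: low variance, high bias) and ordinary importance sampling (nu = 1: unbiased, high variance). The function names and the exact estimator form are my illustration; in the paper the trade-off parameter is chosen by a cross-validation variant.

```python
def trajectory_weight(states, actions, pi_target, pi_behavior):
    """Ordinary importance weight of one episode: prod_t pi(a_t|s_t) / pi_b(a_t|s_t)."""
    w = 1.0
    for s, a in zip(states, actions):
        w *= pi_target(a, s) / pi_behavior(a, s)
    return w

def adaptive_iw_return(episodes, pi_target, pi_behavior, nu=0.5):
    """Flattened, self-normalized importance-weighted return estimate."""
    est, norm = 0.0, 0.0
    for states, actions, rewards in episodes:
        w = trajectory_weight(states, actions, pi_target, pi_behavior) ** nu
        est += w * sum(rewards)
        norm += w
    return est / max(norm, 1e-12)

# Tiny usage with dummy discrete policies (purely illustrative).
pi_t = lambda a, s: 0.7 if a == 1 else 0.3
pi_b = lambda a, s: 0.5
episodes = [([0, 1], [1, 0], [1.0, 0.0]), ([0, 0], [1, 1], [0.5, 0.5])]
print(adaptive_iw_return(episodes, pi_t, pi_b, nu=0.5))
```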

PDF Web [BibTex]

Sparse Multiscale Gaussian Process Regression

Walder, C., Kim, K., Schölkopf, B.

In Proceedings of the 25th International Conference on Machine Learning, pages: 1112-1119, (Editors: WW Cohen and A McCallum and S Roweis), ACM Press, New York, NY, USA, ICML, July 2008 (inproceedings)

Abstract
Most existing sparse Gaussian process (g.p.) models seek computational advantages by basing their computations on a set of m basis functions that are the covariance function of the g.p. with one of its two inputs fixed. We generalise this for the case of Gaussian covariance function, by basing our computations on m Gaussian basis functions with arbitrary diagonal covariance matrices (or length scales). For a fixed number of basis functions and any given criterion, this additional flexibility permits approximations no worse and typically better than was previously possible. We perform gradient based optimisation of the marginal likelihood, which costs O(m^2 n) time where n is the number of data points, and compare the method to various other sparse g.p. methods. Although we focus on g.p. regression, the central idea is applicable to all kernel based algorithms, and we also provide some results for the support vector machine (s.v.m.) and kernel ridge regression (k.r.r.). Our approach outperforms the other methods, particularly for the case of very few basis functions, i.e. a very high sparsity ratio.

PDF PDF DOI [BibTex]

Example-Based Learning for Single-Image Super-Resolution

Kim, K., Kwon, Y.

In DAGM 2008, pages: 456-463, (Editors: Rigoll, G. ), Springer, Berlin, Germany, 30th Annual Symposium of the German Association for Pattern Recognition, June 2008 (inproceedings)

Abstract
This paper proposes a regression-based method for single-image super-resolution. Kernel ridge regression (KRR) is used to estimate the high-frequency details of the underlying high-resolution image. A sparse solution of KRR is found by combining the ideas of kernel matching pursuit and gradient descent, which allows time-complexity to be kept to a moderate level. To resolve the problem of ringing artifacts occurring due to the regularization effect, the regression results are post-processed using a prior model of a generic image class. Experimental results demonstrate the effectiveness of the proposed method.
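
The regression backbone here is kernel ridge regression; the following is a minimal dense KRR sketch (the paper's sparse solution via kernel matching pursuit and the prior-based post-processing are not reproduced; names and toy data are illustrative).

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def krr_fit(X, y, lam=1e-2, sigma=1.0):
    """Solve (K + lam * n * I) alpha = y for the KRR coefficients."""
    n = X.shape[0]
    return np.linalg.solve(rbf(X, X, sigma) + lam * n * np.eye(n), y)

def krr_predict(X_train, alpha, X_test, sigma=1.0):
    """Predict with f(x) = sum_i alpha_i k(x, x_i)."""
    return rbf(X_test, X_train, sigma) @ alpha

# Toy usage: regress a 1-D function from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)
alpha = krr_fit(X, y)
print(krr_predict(X, alpha, np.array([[0.0], [1.5]])))
```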

PDF DOI [BibTex]

A Multiple Kernel Learning Approach to Joint Multi-Class Object Detection

Lampert, C., Blaschko, M.

In DAGM 2008, pages: 31-40, (Editors: Rigoll, G. ), Springer, Berlin, Germany, 30th Annual Symposium of the German Association for Pattern Recognition, June 2008, Main Award DAGM 2008 (inproceedings)

Abstract
Most current methods for multi-class object classification and localization work as independent 1-vs-rest classifiers. They decide whether and where an object is visible in an image purely on a per-class basis. Joint learning of more than one object class would generally be preferable, since this would allow the use of contextual information such as co-occurrence between classes. However, this approach is usually not employed because of its computational cost. In this paper we propose a method to combine the efficiency of single class localization with a subsequent decision process that works jointly for all given object classes. By following a multiple kernel learning (MKL) approach, we automatically obtain a sparse dependency graph of relevant object classes on which to base the decision. Experiments on the PASCAL VOC 2006 and 2007 datasets show that the subsequent joint decision step clearly improves the accuracy compared to single class detection.

PDF ZIP Web DOI [BibTex]

Natural Evolution Strategies

Wierstra, D., Schaul, T., Peters, J., Schmidhuber, J.

In CEC 2008, pages: 3381-3387, IEEE, Piscataway, NJ, USA, IEEE Congress on Evolutionary Computation, June 2008 (inproceedings)

Abstract
This paper presents natural evolution strategies (NES), a novel algorithm for performing real-valued 'black box' function optimization: optimizing an unknown objective function where algorithm-selected function measurements constitute the only information accessible to the method. Natural evolution strategies search the fitness landscape using a multivariate normal distribution with a self-adapting mutation matrix to generate correlated mutations in promising regions. NES shares this property with covariance matrix adaptation (CMA), an evolution strategy (ES) which has been shown to perform well on a variety of high-precision optimization tasks. The natural evolution strategies algorithm, however, is simpler, less ad-hoc and more principled. Self-adaptation of the mutation matrix is derived using a Monte Carlo estimate of the natural gradient towards better expected fitness. By following the natural gradient instead of the 'vanilla' gradient, we can ensure efficient update steps while preventing early convergence due to overly greedy updates, resulting in reduced sensitivity to local suboptima. We show NES has competitive performance with CMA on unimodal tasks, while outperforming it on several multimodal tasks that are rich in deceptive local optima.
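
As a hedged sketch of the search-gradient idea NES builds on, the following implements the plain ('vanilla') score-function gradient for the mean of an isotropic Gaussian search distribution on a toy fitness function; the natural-gradient update and the self-adapting full mutation matrix that distinguish NES are not shown, and all names are illustrative.

```python
import numpy as np

def sphere(x):
    return -np.sum(x**2)  # fitness to maximize; the optimum is at the origin

def search_gradient_es(f, dim=5, pop=50, sigma=0.3, lr=0.1, iters=200, seed=0):
    """Gradient ascent on E_{z ~ N(mu, sigma^2 I)}[f(z)] with respect to mu,
    using the score-function (likelihood-ratio) gradient estimator."""
    rng = np.random.default_rng(seed)
    mu = rng.normal(size=dim)
    for _ in range(iters):
        eps = rng.normal(size=(pop, dim))                # standardized perturbations
        fit = np.array([f(mu + sigma * e) for e in eps])
        fit = (fit - fit.mean()) / (fit.std() + 1e-12)   # baseline / normalization
        mu += lr * (fit[:, None] * eps).mean(axis=0) / sigma
    return mu

print(np.round(search_gradient_es(sphere), 3))  # should approach the origin
```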

PDF Web DOI [BibTex]

Multi-Classification by Categorical Features via Clustering

Seldin, Y.

25th International Conference on Machine Learning (ICML), June 2008 (talk)

Abstract
We derive a generalization bound for multi-classification schemes based on grid clustering in categorical parameter product spaces. Grid clustering partitions the parameter space in the form of a Cartesian product of partitions for each of the parameters. The derived bound provides a means to evaluate clustering solutions in terms of the generalization power of a built-on classifier. For classification based on a single feature the bound serves to find a globally optimal classification rule. Comparison of the generalization power of individual features can then be used for feature ranking. Our experiments show that in this role the bound is much more precise than mutual information or normalized correlation indices.

PDF Web [BibTex]

Partitioning of Image Datasets using Discriminative Context Information

Lampert, CH.

In CVPR 2008, pages: 1-8, IEEE Computer Society, Los Alamitos, CA, USA, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 2008 (inproceedings)

Abstract
We propose a new method to partition an unlabeled dataset, called Discriminative Context Partitioning (DCP). It is motivated by the idea of splitting the dataset based only on how well the resulting parts can be separated from a context class of disjoint data points. This is in contrast to typical clustering techniques like K-means that are based on a generative model by implicitly or explicitly searching for modes in the distribution of samples. The discriminative criterion in DCP avoids the problems that density based methods have when the a priori assumption of multimodality is violated, when the number of samples becomes small in relation to the dimensionality of the feature space, or if the cluster sizes are strongly unbalanced. We formulate DCP's separation property as a large-margin criterion, and show how the resulting optimization problem can be solved efficiently. Experiments on the MNIST and USPS datasets of handwritten digits and on a subset of the Caltech256 dataset show that, given a suitable context, DCP can achieve good results even in situations where density-based clustering techniques fail.

PDF Web DOI [BibTex]

Bayesian online multi-task learning using regularization networks

Pillonetto, G., Dinuzzo, F., De Nicolao, G.

In pages: 4517-4522, IEEE Service Center, Piscataway, NJ, USA, 2008 American Control Conference (ACC), June 2008 (inproceedings)

Abstract
Recently, standard single-task kernel methods have been extended to the case of multi-task learning under the framework of regularization. Experimental results have shown that such an approach can perform much better than single-task techniques, especially when few examples per task are available. However, a possible drawback may be computational complexity. For instance, when using regularization networks, complexity scales as the cube of the overall number of data associated with all the tasks. In this paper, an efficient computational scheme is derived for a widely applied class of multi-task kernels. More precisely, a quadratic loss is assumed and the multi-task kernel is the sum of a common term and a task-specific one. The proposed algorithm performs online learning recursively updating the estimates as new data become available. The learning problem is formulated in a Bayesian setting. The optimal estimates are obtained by solving a sequence of subproblems which involve projection of random variables onto suitable subspaces. The algorithm is tested on a simulated data set.

Web DOI [BibTex]

Correlational Spectral Clustering

Blaschko, MB., Lampert, CH.

In CVPR 2008, pages: 1-8, IEEE Computer Society, Los Alamitos, CA, USA, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 2008 (inproceedings)

Abstract
We present a new method for spectral clustering with paired data based on kernel canonical correlation analysis, called correlational spectral clustering. Paired data are common in real world data sources, such as images with text captions. Traditional spectral clustering algorithms either assume that data can be represented by a single similarity measure, or by co-occurrence matrices that are then used in biclustering. In contrast, the proposed method uses separate similarity measures for each data representation, and allows for projection of previously unseen data that are only observed in one representation (e.g. images but not text). We show that this algorithm generalizes traditional spectral clustering algorithms and show consistent empirical improvement over spectral clustering on a variety of datasets of images with associated text.
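
The first stage is kernel canonical correlation analysis on the paired representations. In one common regularized formulation (notation mine; the paper's exact regularization and normalization may differ), with Gram matrices K_x and K_y one solves

\[ \max_{\alpha,\beta}\ \frac{\alpha^{\top} K_x K_y \beta}{\sqrt{\alpha^{\top}\left(K_x^{2} + \kappa K_x\right)\alpha}\ \sqrt{\beta^{\top}\left(K_y^{2} + \kappa K_y\right)\beta}}, \]

and the leading canonical directions provide the low-dimensional embedding that is subsequently clustered, e.g. with k-means.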

PDF Web DOI [BibTex]

Approximate Dynamic Programming with Gaussian Processes

Deisenroth, M., Peters, J., Rasmussen, C.

In ACC 2008, pages: 4480-4485, IEEE Service Center, Piscataway, NJ, USA, 2008 American Control Conference, June 2008 (inproceedings)

Abstract
In general, it is difficult to determine an optimal closed-loop policy in nonlinear control problems with continuous-valued state and control domains. Hence, approximations are often inevitable. The standard method of discretizing states and controls suffers from the curse of dimensionality and strongly depends on the chosen temporal sampling rate. In this paper, we introduce Gaussian process dynamic programming (GPDP) and determine an approximate globally optimal closed-loop policy. In GPDP, value functions in the Bellman recursion of the dynamic programming algorithm are modeled using Gaussian processes. GPDP returns an optimal state feedback for a finite set of states. Based on these outcomes, we learn a possibly discontinuous closed-loop policy on the entire state space by switching between two independently trained Gaussian processes. A binary classifier selects one Gaussian process to predict the optimal control signal. We show that GPDP is able to yield an almost optimal solution to an LQ problem using few sample points. Moreover, we successfully apply GPDP to the underpowered pendulum swing up, a complex nonlinear control problem.

PDF Web [BibTex]

Beyond Sliding Windows: Object Localization by Efficient Subwindow Search

Lampert, C., Blaschko, M., Hofmann, T.

In CVPR 2008, pages: 1-8, IEEE Computer Society, Los Alamitos, CA, USA, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 2008, Best paper award (inproceedings)

Abstract
Most successful object recognition systems rely on binary classification, deciding only if an object is present or not, but not providing information on the actual object location. To perform localization, one can take a sliding window approach, but this strongly increases the computational cost, because the classifier function has to be evaluated over a large set of candidate subwindows. In this paper, we propose a simple yet powerful branch-and-bound scheme that allows efficient maximization of a large class of classifier functions over all possible subimages. It converges to a globally optimal solution typically in sublinear time. We show how our method is applicable to different object detection and retrieval scenarios. The achieved speedup allows the use of classifiers for localization that formerly were considered too slow for this task, such as SVMs with a spatial pyramid kernel or nearest neighbor classifiers based on the χ²-distance. We demonstrate state-of-the-art performance of the resulting systems on the UIUC Cars dataset, the PASCAL VOC 2006 dataset and in the PASCAL VOC 2007 competition.
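
The branch-and-bound scheme can be sketched generically: candidate rectangles are grouped into sets described by coordinate intervals, kept in a priority queue ordered by an upper bound on the classifier score, and split best-first until a single rectangle remains. The bound function below is a placeholder the caller must supply; constructing such bounds for particular classifiers (e.g. SVMs with spatial pyramid kernels) is the paper's contribution and is not reproduced here. This is my sketch of the search skeleton, not the authors' code.

```python
import heapq

def efficient_subwindow_search(bound_fn, width, height):
    """Best-first branch-and-bound over all rectangles in a width x height image.

    A rectangle set is a tuple of intervals ((l_lo, l_hi), (r_lo, r_hi),
    (t_lo, t_hi), (b_lo, b_hi)); bound_fn(rect_set) must upper-bound the score
    of every rectangle in the set and be exact for singleton sets.
    """
    root = ((0, width - 1), (0, width - 1), (0, height - 1), (0, height - 1))
    heap = [(-bound_fn(root), root)]
    while heap:
        neg_bound, rect_set = heapq.heappop(heap)
        ranges = [hi - lo for lo, hi in rect_set]
        i = max(range(4), key=lambda j: ranges[j])   # split the widest interval
        if ranges[i] == 0:                           # all intervals are singletons
            return -neg_bound, tuple(lo for lo, _ in rect_set)  # score, (l, r, t, b)
        lo, hi = rect_set[i]
        mid = (lo + hi) // 2
        for part in ((lo, mid), (mid + 1, hi)):
            child = rect_set[:i] + (part,) + rect_set[i + 1:]
            heapq.heappush(heap, (-bound_fn(child), child))

# Toy usage: with this (valid but trivial) area bound, the maximizer is the full image.
def area_bound(rs):
    (l_lo, _), (_, r_hi), (t_lo, _), (_, b_hi) = rs
    return max(0, r_hi - l_lo) * max(0, b_hi - t_lo)

print(efficient_subwindow_search(area_bound, width=8, height=6))
```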

PDF PDF Web DOI [BibTex]

Computed Torque Control with Nonparametric Regression Models

Nguyen-Tuong, D., Seeger, M., Peters, J.

In ACC 2008, pages: 212-217, IEEE Service Center, Piscataway, NJ, USA, 2008 American Control Conference, June 2008 (inproceedings)

Abstract
Computed torque control allows the design of considerably more precise, energy-efficient and compliant controls for robots. However, the major obstacle is the requirement of an accurate model for torque generation, which cannot be obtained in some cases using rigid-body formulations due to unmodeled nonlinearities, such as complex friction or actuator dynamics. In such cases, models approximated from robot data present an appealing alternative. In this paper, we compare two nonparametric regression methods for model approximation, i.e., locally weighted projection regression (LWPR) and Gaussian process regression (GPR). While locally weighted regression was employed for real-time model estimation in learning adaptive control, Gaussian process regression has not been used in control to date due to its high computational requirements. The comparison includes the assessment of model approximation for both regression methods using data originating from a SARCOS robot arm, as well as an evaluation of the robot tracking performance in computed torque control employing the approximated models. Our results show that GPR can be applied for real-time control achieving higher accuracy. However, for online learning LWPR is superior by reason of its lower computational requirements.
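
For reference, one common form of the control law the abstract refers to is (notation mine)

\[ u = \hat f(q, \dot q, \ddot q_d) + K_P\,(q_d - q) + K_D\,(\dot q_d - \dot q), \qquad f(q, \dot q, \ddot q) = M(q)\,\ddot q + C(q, \dot q)\,\dot q + g(q), \]

where the feedforward term \hat f approximates the inverse dynamics f; in this paper it is learned from data by LWPR or GPR rather than computed from the rigid-body expression.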

PDF Web DOI [BibTex]

Multi-Classification by Categorical Features via Clustering

Seldin, Y., Tishby, N.

In Proceedings of the 25th International Conference on Machine Learning (ICML 2008), pages: 920-927, 25th International Conference on Machine Learning (ICML), June 2008 (inproceedings)

Abstract
We derive a generalization bound for multi-classification schemes based on grid clustering in categorical parameter product spaces. Grid clustering partitions the parameter space in the form of a Cartesian product of partitions for each of the parameters. The derived bound provides a means to evaluate clustering solutions in terms of the generalization power of a built-on classifier. For classification based on a single feature the bound serves to find a globally optimal classification rule. Comparison of the generalization power of individual features can then be used for feature ranking. Our experiments show that in this role the bound is much more precise than mutual information or normalized correlation indices.

PDF Web [BibTex]

A Kernel Test of Nonlinear Granger Causality

Sun, X.

In Proceedings of the Workshop on Inference and Estimation in Probabilistic Time-Series Models, pages: 79-89, (Editors: Barber, D. , A. T. Cemgil, S. Chiappa), Isaac Newton Institute for Mathematical Sciences, Cambridge, United Kingdom, Workshop on Inference and Estimation in Probabilistic Time-Series Models, June 2008 (inproceedings)

Abstract
We present a novel test of nonlinear Granger causality in bivariate time series. The trace norm of conditional covariance operators is used to capture the prediction errors. Based on this measure, a subsampling-based multiple testing procedure tests the prediction improvement of one time series by the other one. The distributional properties of the resulting p-values reveal the direction of Granger causality. Encouraging results of experiments with simulated and real-world data support our approach.

PDF [BibTex]

Thin-Plate Splines Between Riemannian Manifolds

Steinke, F., Hein, M., Schölkopf, B.

Workshop on Geometry and Statistics of Shapes, June 2008 (talk)

Abstract
With the help of differential geometry we describe a framework to define a thin-plate spline like energy for maps between arbitrary Riemannian manifolds. The so-called Eells energy only depends on the intrinsic geometry of the input and output manifold, but not on their respective representation. The energy can then be used for regression between manifolds; we present results for cases where the outputs are rotations, sets of angles, or points on 3D surfaces. In the future we plan to also target regression where the output is an element of "shape space", understood as a Riemannian manifold. One could also further explore the meaning of the Eells energy when applied to diffeomorphisms between shapes, especially with regard to its potential use as a distance measure between shapes that does not depend on the embedding or the parametrisation of the shapes.

Web [BibTex]

Bayesian Color Constancy Revisited

Gehler, P., Rother, C., Blake, A., Minka, T., Sharp, T.

In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR, June 2008, http://dx.doi.org/10.1109/CVPR.2008.4587765 (inproceedings)

website+code+data pdf [BibTex]

Real-time Learning of Resolved Velocity Control on a Mitsubishi PA-10

Peters, J., Nguyen-Tuong, D.

In ICRA 2008, pages: 2872-2877, IEEE Service Center, Piscataway, NJ, USA, 2008 IEEE International Conference on Robotics and Automation, May 2008 (inproceedings)

Abstract
Learning inverse kinematics has long fascinated the robot learning community. While humans acquire this transformation to complicated tool spaces with ease, it is not a straightforward application for supervised learning algorithms due to the non-convexity of the learning problem. However, the key insight that the problem can be considered convex in small local regions allows the application of locally linear learning methods. Nevertheless, the local solution of the problem depends on the data distribution, which can result in inconsistent global solutions with large model discontinuities. While this problem can be treated in various ways in offline learning, it poses a serious problem for online learning. Previous approaches to the real-time learning of inverse kinematics avoid this problem using smart data generation, in which the learner biases its own solution. Such biased solutions can result in premature convergence, and from the resulting solution it is often hard to understand what has been learned in that local region. This paper improves upon and solves this problem by presenting a learning algorithm which can deal with this inconsistency through re-weighting the data online. Furthermore, we show that our algorithm works not only in simulation; we also present real-time learning results on a physical Mitsubishi PA-10 robot arm.

PDF Web DOI [BibTex]

State Space Compression with Predictive Representations

Boularias, A., Izadi, M., Chaib-Draa, B.

In Flairs 2008, pages: 41-46, (Editors: Wilson, D. C., H. C. Lane), AAAI Press, Menlo Park, CA, USA, 21st International Florida Artificial Intelligence Research Society Conference, May 2008 (inproceedings)

Abstract
Current studies have demonstrated that the representational power of predictive state representations (PSRs) is at least equal to that of partially observable Markov decision processes (POMDPs). At the same time, early steps in planning and generalization with PSRs suggest substantial improvements compared to POMDPs. However, the lack of practical algorithms for learning these representations severely restricts their applicability. The computational inefficiency of exact PSR learning methods naturally leads to the exploration of various approximation methods that can provide a good set of core tests through less computational effort. In this paper, we address this problem in an optimization framework. In particular, our approach aims to minimize the potential error that may be caused by missing a number of core tests. We provide analysis of the error caused by this compression and present an empirical evaluation illustrating the performance of this approach.

PDF Web [BibTex]

Learning resolved velocity control

Peters, J.

2008 IEEE International Conference on Robotics and Automation (ICRA), May 2008 (talk)

Web [BibTex]

Bayesian methods for protein structure determination

Habeck, M.

Machine Learning in Structural Bioinformatics, April 2008 (talk)

Web [BibTex]

Graph Mining with Variational Dirichlet Process Mixture Models

Tsuda, K., Kurihara, K.

In SDM 2008, pages: 432-442, (Editors: Zaki, M. J.), Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 8th SIAM International Conference on Data Mining, April 2008 (inproceedings)

Abstract
Graph data such as chemical compounds and XML documents are getting more common in many application domains. A main difficulty of graph data processing lies in the intrinsic high dimensionality of graphs, namely, when a graph is represented as a binary feature vector of indicators of all possible subgraph patterns, the dimensionality gets too large for usual statistical methods. We propose a nonparametric Bayesian method for clustering graphs and selecting salient patterns at the same time. Variational inference is adopted here, because sampling is not applicable due to extremely high dimensionality. The feature set minimizing the free energy is efficiently collected with the DFS code tree, where the generation of useless subgraphs is suppressed by a tree pruning condition. In experiments, our method is compared with a simpler approach based on frequent subgraph mining, and graph kernels.

PDF Web [BibTex]

Model-Based Reinforcement Learning with Continuous States and Actions

Deisenroth, M., Rasmussen, C., Peters, J.

In ESANN 2008, pages: 19-24, (Editors: Verleysen, M. ), d-side, Evere, Belgium, European Symposium on Artificial Neural Networks, April 2008 (inproceedings)

Abstract
Finding an optimal policy in a reinforcement learning (RL) framework with continuous state and action spaces is challenging. Approximate solutions are often inevitable. GPDP is an approximate dynamic programming algorithm based on Gaussian process (GP) models for the value functions. In this paper, we extend GPDP to the case of unknown transition dynamics. After building a GP model for the transition dynamics, we apply GPDP to this model and determine a continuous-valued policy in the entire state space. We apply the resulting controller to the underpowered pendulum swing up. Moreover, we compare our results on this RL task to a nearly optimal discrete DP solution in a fully known environment.

PDF Web [BibTex]

Learning Inverse Dynamics: A Comparison

Nguyen-Tuong, D., Peters, J., Seeger, M., Schölkopf, B.

In Advances in Computational Intelligence and Learning: Proceedings of the European Symposium on Artificial Neural Networks, pages: 13-18, (Editors: M Verleysen), d-side, Evere, Belgium, 16th European Symposium on Artificial Neural Networks (ESANN), April 2008 (inproceedings)

Abstract
While it is well known that models can enhance control performance in terms of precision or energy efficiency, their practical application has often been limited by the complexity of manually obtaining sufficiently accurate models. In the past, learning has proven a viable alternative to using a combination of rigid-body dynamics and handcrafted approximations of nonlinearities. However, a major open question is which nonparametric learning method is best suited for learning dynamics. Traditionally, locally weighted projection regression (LWPR) has been the standard method, as it is capable of online, real-time learning for very complex robots. However, while LWPR has had significant impact on learning in robotics, alternative nonparametric regression methods such as support vector regression (SVR) and Gaussian process regression (GPR) offer interesting alternatives with fewer open parameters and potentially higher accuracy. In this paper, we evaluate these three alternatives for model learning. Our comparison consists of the evaluation of learning quality for each regression method using original data from a SARCOS robot arm, as well as the robot tracking performance employing the learned models. The results show that GPR and SVR achieve superior learning precision and can be applied for real-time control, obtaining higher accuracy. However, for online learning, LWPR is the better method due to its lower computational requirements.
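
Of the compared methods, GPR has a particularly compact predictor; below is a minimal single-output sketch with fixed hyperparameters and synthetic stand-in data (not the SARCOS set), meant only to illustrate the form of the model, not the paper's setup.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * sq / ell**2)

def gpr_fit(X, y, noise=0.1, ell=1.0):
    """Return alpha = (K + sigma_n^2 I)^{-1} y via a Cholesky factorization."""
    K = rbf(X, X, ell) + noise**2 * np.eye(X.shape[0])
    L = np.linalg.cholesky(K)
    return np.linalg.solve(L.T, np.linalg.solve(L, y))

def gpr_predict(X_train, alpha, X_test, ell=1.0):
    """Posterior mean k_*^T (K + sigma_n^2 I)^{-1} y at the test inputs."""
    return rbf(X_test, X_train, ell) @ alpha

# Toy usage: inputs stand in for one joint's (q, qdot, qddot), the target for its torque.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.05 * rng.normal(size=300)
alpha = gpr_fit(X, y, noise=0.1, ell=1.5)
print(gpr_predict(X, alpha, X[:3], ell=1.5), y[:3])
```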

PDF Web [BibTex]


2007


Reaction graph kernels for discovering missing enzymes in the plant secondary metabolism

Saigo, H., Hattori, M., Tsuda, K.

NIPS Workshop on Machine Learning in Computational Biology, December 2007 (talk)

Abstract
Secondary metabolic pathways in plants are important for finding druggable candidate enzymes. However, there are many enzymes whose functions are still undiscovered, especially in organism-specific metabolic pathways. We propose reaction graph kernels for automatically assigning the EC numbers to unknown enzymatic reactions in a metabolic network. Experiments are carried out on the KEGG/REACTION database and our method successfully predicted the first three digits of the EC number with 83% accuracy. We also exhaustively predicted missing enzymatic functions in the plant secondary metabolism pathways, and evaluated our results in terms of biochemical validity.

Web [BibTex]

Positional Oligomer Importance Matrices

Sonnenburg, S., Zien, A., Philips, P., Rätsch, G.

NIPS Workshop on Machine Learning in Computational Biology, December 2007 (talk)

Abstract
At the heart of many important bioinformatics problems, such as gene finding and function prediction, is the classification of biological sequences, above all of DNA and proteins. In many cases, the most accurate classifiers are obtained by training SVMs with complex sequence kernels, for instance for transcription starts or splice sites. However, an often criticized downside of SVMs with complex kernels is that it is very hard for humans to understand the learned decision rules and to derive biological insights from them. To close this gap, we introduce the concept of positional oligomer importance matrices (POIMs) and develop an efficient algorithm for their computation. We demonstrate how they overcome the limitations of sequence logos, and how they can be used to find relevant motifs for different biological phenomena in a straightforward way. Note that the concept of POIMs is not limited to interpreting SVMs, but is applicable to general k-mer based scoring systems.

Web [BibTex]

Machine Learning Algorithms for Polymorphism Detection

Schweikert, G., Zeller, G., Weigel, D., Schölkopf, B., Rätsch, G.

NIPS Workshop on Machine Learning in Computational Biology, December 2007 (talk)

Web [BibTex]

An Automated Combination of Kernels for Predicting Protein Subcellular Localization

Zien, A., Ong, C.

NIPS Workshop on Machine Learning in Computational Biology, December 2007 (talk)

Abstract
Protein subcellular localization is a crucial ingredient to many important inferences about cellular processes, including prediction of protein function and protein interactions. We propose a new class of protein sequence kernels which considers all motifs including motifs with gaps. This class of kernels allows the inclusion of pairwise amino acid distances into their computation. We utilize an extension of the multiclass support vector machine (SVM) method which directly solves protein subcellular localization without resorting to the common approach of splitting the problem into several binary classification problems. To automatically search over families of possible amino acid motifs, we optimize over multiple kernels at the same time. We compare our automated approach to four other predictors on three different datasets, and show that we perform better than the current state of the art. Furthermore, our method provides some insights as to which features are most useful for determining subcellular localization, which are in agreement with biological reasoning.

Web [BibTex]

Challenges in Brain-Computer Interface Development: Induction, Measurement, Decoding, Integration

Hill, NJ.

Invited keynote talk at the launch of BrainGain, the Dutch BCI research consortium, November 2007 (talk)

Abstract
I'll present a perspective on Brain-Computer Interface development from Tübingen. Some of the benefits promised by BCI technology lie in the near foreseeable future, and some further away. Our motivation is to make BCI technology feasible for the people who could benefit from what it has to offer soon: namely, people in the "completely locked-in" state. I'll mention some of the challenges of working with this user group, and explain the specific directions they have motivated us to take in developing experimental methods, algorithms, and software.

[BibTex]

Towards compliant humanoids: an experimental assessment of suitable task space position/orientation controllers

Nakanishi, J., Mistry, M., Peters, J., Schaal, S.

In IROS 2007, pages: 2520-2527, (Editors: Grant, E., T. C. Henderson), IEEE Service Center, Piscataway, NJ, USA, IEEE/RSJ International Conference on Intelligent Robots and Systems, November 2007 (inproceedings)

Abstract
Compliant control will be a prerequisite for humanoid robotics if these robots are supposed to work safely and robustly in human and/or dynamic environments. One view of compliant control is that a robot should control a minimal number of degrees-of-freedom (DOFs) directly, i.e., those relevant DOFs for the task, and keep the remaining DOFs maximally compliant, usually in the null space of the task. This view naturally leads to task space control. However, surprisingly few implementations of task space control can be found in actual humanoid robots. This paper makes a first step towards assessing the usefulness of task space controllers for humanoids by investigating which choices of controllers are available and what inherent control characteristics they have; this treatment will concern position and orientation control, where the latter is based on a quaternion formulation. Empirical evaluations on an anthropomorphic Sarcos master arm illustrate the robustness of the different controllers as well as the ease of implementing and tuning them. Our extensive empirical results demonstrate that simpler task space controllers, e.g., classical resolved motion rate control or resolved acceleration control, can be quite advantageous in face of inevitable modeling errors in model-based control, and that well chosen formulations are easy to implement and quite robust, such that they are useful for humanoids.

PDF Web DOI [BibTex]

Sistema avanzato per la classificazione delle aree agricole in immagini ad elevata risoluzione geometrica: applicazione al territorio del Trentino [Advanced system for the classification of agricultural areas in high geometric resolution images: application to the Trentino territory]

Arnoldi, E., Bruzzone, L., Carlin, L., Pedron, L., Persello, C.

In pages: 1-6, 11. Conferenza Nazionale ASITA, November 2007 (inproceedings)

PDF Web [BibTex]

Performance Stabilization and Improvement in Graph-based Semi-supervised Learning with Ensemble Method and Graph Sharpening

Choi, I., Shin, H.

In Korean Data Mining Society Conference, pages: 257-262, Korean Data Mining Society, Seoul, Korea, Korean Data Mining Society Conference, November 2007 (inproceedings)

PDF [BibTex]

Policy Learning for Robotics

Peters, J.

14th International Conference on Neural Information Processing (ICONIP), November 2007 (talk)

Web [BibTex]

Hilbert Space Representations of Probability Distributions

Gretton, A.

2nd Workshop on Machine Learning and Optimization at the ISM, October 2007 (talk)

Abstract
Many problems in unsupervised learning require the analysis of features of probability distributions. At the most fundamental level, we might wish to determine whether two distributions are the same, based on samples from each - this is known as the two-sample or homogeneity problem. We use kernel methods to address this problem, by mapping probability distributions to elements in a reproducing kernel Hilbert space (RKHS). Given a sufficiently rich RKHS, these representations are unique: thus comparing feature space representations allows us to compare distributions without ambiguity. Applications include testing whether cancer subtypes are distinguishable on the basis of DNA microarray data, and whether low frequency oscillations measured at an electrode in the cortex have a different distribution during a neural spike. A more difficult problem is to discover whether two random variables drawn from a joint distribution are independent. It turns out that any dependence between pairs of random variables can be encoded in a cross-covariance operator between appropriate RKHS representations of the variables, and we may test independence by looking at a norm of the operator. We demonstrate this independence test by establishing dependence between an English text and its French translation, as opposed to French text on the same topic but otherwise unrelated. Finally, we show that this operator norm is itself a difference in feature means.

PDF Web [BibTex]

Discriminative Subsequence Mining for Action Classification

Nowozin, S., BakIr, G., Tsuda, K.

In ICCV 2007, pages: 1919-1923, IEEE Computer Society, Los Alamitos, CA, USA, 11th IEEE International Conference on Computer Vision, October 2007 (inproceedings)

Abstract
Recent approaches to action classification in videos have used sparse spatio-temporal words encoding local appearance around interesting movements. Most of these approaches use a histogram representation, discarding the temporal order among features. But this ordering information can contain important information about the action itself, e.g. consider the sport disciplines of hurdle race and long jump, where the global temporal order of motions (running, jumping) is important to discriminate between the two. In this work we propose to use a sequential representation which retains this temporal order. Further, we introduce Discriminative Subsequence Mining to find optimal discriminative subsequence patterns. In combination with the LPBoost classifier, this amounts to simultaneously learning a classification function and performing feature selection in the space of all possible feature sequences. The resulting classifier linearly combines a small number of interpretable decision functions, each checking for the presence of a single discriminative pattern. The classifier is benchmarked on the KTH action classification data set and outperforms the best known results in the literature.

PDF Web DOI [BibTex]

Regression with Intervals

Kashima, H., Yamazaki, K., Saigo, H., Inokuchi, A.

International Workshop on Data-Mining and Statistical Science (DMSS2007), October 2007, JSAI Incentive Award. Talk was given by Hisashi Kashima. (talk)

Web [BibTex]
