2009

Learning an Interactive Segmentation System

Nickisch, H., Kohli, P., Rother, C.

Max Planck Institute for Biological Cybernetics, December 2009 (techreport)

Abstract
Many successful applications of computer vision to image or video manipulation are interactive by nature. However, parameters of such systems are often trained neglecting the user. Traditionally, interactive systems have been treated in the same manner as their fully automatic counterparts. Their performance is evaluated by computing the accuracy of their solutions under some fixed set of user interactions. This paper proposes a new evaluation and learning method which brings the user in the loop. It is based on the use of an active robot user - a simulated model of a human user. We show how this approach can be used to evaluate and learn parameters of state-of-the-art interactive segmentation systems. We also show how simulated user models can be integrated into the popular max-margin method for parameter learning and propose an algorithm to solve the resulting optimisation problem.

Web [BibTex]

Learning Probabilistic Models via Bayesian Inverse Planning

Boularias, A., Chaib-Draa, B.

NIPS Workshop on Probabilistic Approaches for Robotics and Control, December 2009 (poster)

PDF Web [BibTex]

Bayesian Quadratic Reinforcement Learning

Hennig, P., Stern, D., Graepel, T.

NIPS Workshop on Probabilistic Approaches for Robotics and Control, December 2009 (poster)

PDF Web [BibTex]

A computational model of human table tennis for robot application

Mülling, K., Peters, J.

In AMS 2009, pages: 57-64, (Editors: Dillmann, R. , J. Beyerer, C. Stiller, M. Zöllner, T. Gindele), Springer, Berlin, Germany, Autonome Mobile Systeme, December 2009 (inproceedings)

Abstract
Table tennis is a difficult motor skill that requires all basic components of a general motor skill learning system. In order to get a step closer to such a generic approach to the automatic acquisition and refinement of table tennis, we study table tennis from a human motor control point of view. We make use of the basic models of discrete human movement phases, virtual hitting points, and the operational timing hypothesis. Using these components, we create a computational model which is aimed at reproducing human-like behavior. We verify the functionality of this model in a physically realistic simulation of a Barrett WAM.

Web DOI [BibTex]

A second order sliding mode controller with polygonal constraints

Dinuzzo, F.

In pages: 6715-6719, IEEE, Piscataway, NJ, USA, 48th IEEE Conference on Decision and Control (CDC), December 2009 (inproceedings)

Abstract
We present a discontinuous controller that ensures uniform finite-time stabilization of the output to zero for uncertain SISO systems of relative degree two, while keeping the auxiliary system state within a prescribed convex polygon. The proposed method extends the applicability of second-order sliding mode controllers to uncertain dynamical systems with constraints.

Web DOI [BibTex]

A PAC-Bayesian Approach to Formulation of Clustering Objectives

Seldin, Y., Tishby, N.

In Proceedings of the NIPS 2009 Workshop "Clustering: Science or Art? Towards Principled Approaches", pages: 1-4, NIPS Workshop "Clustering: Science or Art? Towards Principled Approaches", December 2009 (inproceedings)

Abstract
Clustering is a widely used tool for exploratory data analysis. However, the theoretical understanding of clustering is very limited. We still do not have a well-founded answer to the seemingly simple question of “how many clusters are present in the data?”, and furthermore a formal comparison of clusterings based on different optimization objectives is far beyond our abilities. The lack of good theoretical support gives rise to multiple heuristics that confuse practitioners and stall the development of the field. We suggest that the ill-posed nature of clustering problems is caused by the fact that clustering is often taken out of its subsequent application context. We argue that one does not cluster the data just for the sake of clustering it, but rather to facilitate the solution of some higher level task. By evaluating the clustering’s contribution to the solution of the higher level task it is possible to compare different clusterings, even those obtained by different optimization objectives. In preceding work it was shown that such an approach can be applied to the evaluation and design of co-clustering solutions. Here we suggest that this approach can be extended to other settings where clustering is applied.

PDF Web [BibTex]

Policy Transfer in Apprenticeship Learning

Boularias, A., Chaib-Draa, B.

NIPS Workshop on Transfer Learning for Structured Data (TLSD-09), December 2009 (poster)

PDF Web [BibTex]

Notes on Graph Cuts with Submodular Edge Weights

Jegelka, S., Bilmes, J.

In pages: 1-6, NIPS Workshop on Discrete Optimization in Machine Learning: Submodularity, Sparsity & Polyhedra (DISCML), December 2009 (inproceedings)

Abstract
Generalizing the cost in the standard min-cut problem to a submodular cost function immediately makes the problem harder. Not only do we prove NP-hardness even for nonnegative submodular costs, but we also show a lower bound of Ω(|V|^{1/3}) on the approximation factor for the (s, t)-cut version of the problem. On the positive side, we propose and compare three approximation algorithms with an overall approximation factor of O(min{|V|, √|E| log |V|}) that appear to do well in practice.

PDF Web [BibTex]

Learning new basic Movements for Robotics

Kober, J., Peters, J.

In AMS 2009, pages: 105-112, (Editors: Dillmann, R. , J. Beyerer, C. Stiller, M. Zöllner, T. Gindele), Springer, Berlin, Germany, Autonome Mobile Systeme, December 2009 (inproceedings)

Abstract
Obtaining novel skills is one of the most important problems in robotics. Machine learning techniques may be a promising approach for automatic and autonomous acquisition of movement policies. However, this requires both an appropriate policy representation and suitable learning algorithms. Employing the most recent form of the dynamical systems motor primitives originally introduced by Ijspeert et al. [1], we show how both discrete and rhythmic tasks can be learned using a concerted approach of both imitation and reinforcement learning, and present our current best performing learning algorithms. Finally, we show that it is possible to include a start-up phase in rhythmic primitives. We apply our approach to two elementary movements, i.e., Ball-in-a-Cup and Ball-Paddling, which can be learned on a real Barrett WAM robot arm at a pace similar to human learning.

PDF Web DOI [BibTex]

From Motor Learning to Interaction Learning in Robots

Sigaud, O., Peters, J.

In Proceedings of 7ème Journées Nationales de la Recherche en Robotique, pages: 189-195, JNRR, November 2009 (inproceedings)

Abstract
The number of advanced robot systems has been increasing in recent years yielding a large variety of versatile designs with many degrees of freedom. These robots have the potential of being applicable in uncertain tasks outside well-structured industrial settings. However, the complexity of both systems and tasks is often beyond the reach of classical robot programming methods. As a result, a more autonomous solution for robot task acquisition is needed where robots adaptively adjust their behaviour to the encountered situations and required tasks. Learning approaches pose one of the most appealing ways to achieve this goal. However, while learning approaches are of high importance for robotics, we cannot simply use off-the-shelf methods from the machine learning community as these usually do not scale into the domains of robotics due to excessive computational cost as well as a lack of scalability. Instead, domain appropriate approaches are needed. We focus here on several core domains of robot learning. For accurate task execution, we need motor learning capabilities. For fast learning of the motor tasks, imitation learning offers the most promising approach. Self improvement requires reinforcement learning approaches that scale into the domain of complex robots. Finally, for efficient interaction of humans with robot systems, we will need a form of interaction learning. This contribution provides a general introduction to these issues and briefly presents the contributions of the related book chapters to the corresponding research topics.

PDF Web [BibTex]

Detection of objects in noisy images and site percolation on square lattices

Langovoy, M., Wittich, O.

(2009-035), EURANDOM, Technische Universiteit Eindhoven, November 2009 (techreport)

Abstract
We propose a novel probabilistic method for the detection of objects in noisy images. The method uses results from percolation and random graph theories. We present an algorithm that detects objects of unknown shapes in the presence of random noise. Our procedure substantially differs from wavelet-based algorithms. The algorithm has linear complexity and exponential accuracy and is appropriate for real-time systems. We prove results on the consistency and algorithmic complexity of our procedure.
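
To make the percolation idea concrete, here is a minimal sketch (not the paper's calibrated procedure): threshold the noisy image and declare a detection when a 4-connected cluster on the square lattice grows larger than what pure noise would produce. The intensity and cluster-size thresholds below are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def detect_object(image, intensity_thresh, cluster_size_thresh):
    """Flag an object if thresholding the noisy image leaves a connected
    cluster (4-neighbourhood, i.e. site percolation on the square lattice)
    larger than expected under pure noise."""
    black = image > intensity_thresh
    # Label connected clusters with 4-connectivity (von Neumann neighbourhood).
    structure = np.array([[0, 1, 0],
                          [1, 1, 1],
                          [0, 1, 0]])
    labels, num_clusters = ndimage.label(black, structure=structure)
    if num_clusters == 0:
        return False
    largest = np.bincount(labels.ravel())[1:].max()
    # Under noise alone, clusters stay small (subcritical percolation);
    # a large cluster signals an object.
    return largest >= cluster_size_thresh

# Toy usage: a noisy image with a brighter square in the middle.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, size=(64, 64))
img[24:40, 24:40] += 2.0
print(detect_object(img, intensity_thresh=1.0, cluster_size_thresh=50))
```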

PDF [BibTex]

An Incremental GEM Framework for Multiframe Blind Deconvolution, Super-Resolution, and Saturation Correction

Harmeling, S., Sra, S., Hirsch, M., Schölkopf, B.

(187), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2009 (techreport)

Abstract
We develop an incremental generalized expectation maximization (GEM) framework to model the multiframe blind deconvolution problem. A simplistic version of this problem was recently studied by Harmeling et al. We solve a more realistic version of this problem which includes the following major features: (i) super-resolution ability despite noise and unknown blurring; (ii) saturation correction, i.e., handling of overexposed pixels that can otherwise confound the image processing; and (iii) simultaneous handling of color channels. These features are seamlessly integrated into our incremental GEM framework to yield simple but efficient multiframe blind deconvolution algorithms. We present technical details concerning critical steps of our algorithms, especially to highlight how all operations can be written using matrix-vector multiplications. We apply our algorithm to real-world images from astronomy and to super-resolution tasks. Our experimental results show that our methods yield improved resolution and deconvolution at the same time.

PDF [BibTex]

Efficient Filter Flow for Space-Variant Multiframe Blind Deconvolution

Hirsch, M., Sra, S., Schölkopf, B., Harmeling, S.

(188), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2009 (techreport)

Abstract
Motivated by the goal of facilitating space-variant blind deconvolution, we present a class of linear transformations that are expressive enough for space-variant filters, yet especially designed for efficient matrix-vector multiplications. Successful results on astronomical imaging through atmospheric turbulence and on noisy magnetic resonance images of constantly moving objects demonstrate the practical significance of our approach.

PDF [BibTex]

Detecting Objects in Large Image Collections and Videos by Efficient Subimage Retrieval

Lampert, CH.

In ICCV 2009, pages: 987-994, IEEE Computer Society, Piscataway, NJ, USA, Twelfth IEEE International Conference on Computer Vision, October 2009 (inproceedings)

Abstract
We study the task of detecting the occurrence of objects in large image collections or in videos, a problem that combines aspects of content based image retrieval and object localization. While most previous approaches are either limited to special kinds of queries, or do not scale to large image sets, we propose a new method, efficient subimage retrieval (ESR), which is at the same time very flexible and very efficient. Relying on a two-layered branch-and-bound setup, ESR performs object-based image retrieval in sets of 100,000 or more images within seconds. An extensive evaluation on several datasets shows that ESR is not only very fast, but it also achieves detection accuracies that are on par with or superior to previously published methods for object-based image retrieval.

PDF Web DOI [BibTex]

Algebraic polynomials and moments of stochastic integrals

Langovoy, M.

(2009-031), EURANDOM, Technische Universiteit Eindhoven, October 2009 (techreport)

PDF [BibTex]

Toward a Theory of Consciousness

Tononi, G., Balduzzi, D.

In The Cognitive Neurosciences, pages: 1201-1220, (Editors: Gazzaniga, M.S.), MIT Press, Cambridge, MA, USA, October 2009 (inbook)

Web [BibTex]

A new non-monotonic algorithm for PET image reconstruction

Sra, S., Kim, D., Dhillon, I., Schölkopf, B.

In IEEE - Nuclear Science Symposium Conference Record (NSS/MIC), 2009, pages: 2500-2502, (Editors: B Yu), IEEE, Piscataway, NJ, USA, IEEE Nuclear Science Symposium and Medical Imaging Conference, October 2009 (inproceedings)

Abstract
Maximizing some form of Poisson likelihood (either with or without penalization) is central to image reconstruction algorithms in emission tomography. In this paper we introduce NMML, a non-monotonic algorithm for maximum likelihood PET image reconstruction. NMML offers a simple and flexible procedure that also easily incorporates standard convex regularization for doing penalized likelihood estimation. A vast number of image reconstruction algorithms have been developed for PET, and new ones continue to be designed. Among these, methods based on the expectation maximization (EM) and ordered-subsets (OS) framework seem to have enjoyed the greatest popularity. Our method NMML differs fundamentally from methods based on EM: i) it does not depend on the concept of optimization transfer (or surrogate functions); and ii) it is a rapidly converging non-monotonic descent procedure. The greatest strengths of NMML, however, are its simplicity, efficiency, and scalability, which make it especially attractive for tomographic reconstruction. We provide a theoretical analysis of NMML, and empirically observe it to outperform standard EM-based methods, sometimes by orders of magnitude. NMML seamlessly allows integration of penalties (regularizers) in the likelihood. This ability can prove to be crucial, especially because with the rapidly rising importance of combined PET/MR scanners, one will want to include more “prior” knowledge into the reconstruction.
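
For orientation, the sketch below shows the Poisson log-likelihood at stake and the classical EM-type (MLEM / Richardson-Lucy) multiplicative update that NMML is contrasted with; this is the baseline family, not the NMML algorithm itself, and the random nonnegative system matrix is a stand-in for a real tomographic projector.

```python
import numpy as np

def poisson_loglik(x, A, y, eps=1e-12):
    """Objective that both EM-type methods and NMML maximize (constants dropped):
    L(x) = sum_i [ y_i * log((Ax)_i) - (Ax)_i ]."""
    Ax = A @ x
    return float(np.sum(y * np.log(Ax + eps) - Ax))

def mlem(A, y, n_iter=100, eps=1e-12):
    """Classical MLEM (Richardson-Lucy) multiplicative update for the Poisson
    likelihood; each iteration is guaranteed not to decrease L(x)."""
    m, n = A.shape
    x = np.ones(n)
    sens = A.T @ np.ones(m)                 # sensitivity image A^T 1
    for _ in range(n_iter):
        x = x * (A.T @ (y / (A @ x + eps))) / (sens + eps)
    return x

# Toy problem with a random nonnegative "system matrix" standing in for a
# real forward projector.
rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, size=(80, 40))
x_true = rng.uniform(0.0, 2.0, size=40)
y = rng.poisson(A @ x_true).astype(float)

x0, x_hat = np.ones(40), mlem(A, y)
print("log-likelihood:", round(poisson_loglik(x0, A, y), 1),
      "->", round(poisson_loglik(x_hat, A, y), 1))
```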

PDF DOI [BibTex]

Approximation Algorithms for Tensor Clustering

Jegelka, S., Sra, S., Banerjee, A.

In Algorithmic Learning Theory: 20th International Conference, pages: 368-383, (Editors: Gavalda, R. , G. Lugosi, T. Zeugmann, S. Zilles), Springer, Berlin, Germany, ALT, October 2009 (inproceedings)

Abstract
We present the first (to our knowledge) approximation algorithm for tensor clustering—a powerful generalization to basic 1D clustering. Tensors are increasingly common in modern applications dealing with complex heterogeneous data and clustering them is a fundamental tool for data analysis and pattern discovery. Akin to their 1D cousins, common tensor clustering formulations are NP-hard to optimize. But, unlike in the 1D case, no approximation algorithms seem to be known. We address this imbalance and build on recent co-clustering work to derive a tensor clustering algorithm with approximation guarantees, allowing metrics and divergences (e.g., Bregman) as objective functions. Therewith, we answer two open questions by Anagnostopoulos et al. (2008). Our analysis yields a constant approximation factor independent of data size; a worst-case example shows this factor to be tight for Euclidean co-clustering. However, empirically the approximation factor is observed to be conservative, so our method can also be used in practice.

PDF Web DOI [BibTex]

Active learning using mean shift optimization for robot grasping

Kroemer, O., Detry, R., Piater, J., Peters, J.

In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), pages: 2610-2615, IEEE Service Center, Piscataway, NJ, USA, 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2009 (inproceedings)

Abstract
When children learn to grasp a new object, they often know several possible grasping points from observing a parent's demonstration and subsequently learn better grasps by trial and error. From a machine learning point of view, this process is an active learning approach. In this paper, we present a new robot learning framework for reproducing this ability in robot grasping. For doing so, we chose a straightforward approach: first, the robot observes a few good grasps by demonstration and learns a value function for these grasps using Gaussian process regression. Subsequently, it chooses grasps which are optimal with respect to this value function using a mean-shift optimization approach, and tries them out on the real system. Upon every completed trial, the value function is updated, and in the following trials it is more likely to choose even better grasping points. This method exhibits fast learning due to the data-efficiency of the Gaussian process regression framework and the fact that the mean-shift method provides maxima of this cost function. Experiments were repeatedly carried out successfully on a real robot system. After less than sixty trials, our system has adapted its grasping policy to consistently exhibit successful grasps.
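
A toy sketch of the loop described above, under several simplifying assumptions (a 2D "grasp space", a hand-made reward simulator, fixed kernel widths): fit a GP value function to the grasps tried so far, climb it with a reward-weighted mean-shift style fixed-point iteration, execute the candidate, and refit.

```python
import numpy as np

def rbf_kernel(A, B, ell=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def gp_mean(Xq, X, y, noise=1e-2):
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    return rbf_kernel(Xq, X) @ np.linalg.solve(K, y)

def mean_shift_argmax(X, values, start, bandwidth=0.25, n_iter=30):
    """Weighted mean-shift: repeatedly move to the kernel-weighted mean of the
    tried grasps, with weights = kernel * (shifted) value, to climb the value surface."""
    x = start.copy()
    w_val = values - values.min() + 1e-6          # nonnegative weights
    for _ in range(n_iter):
        k = np.exp(-0.5 * ((X - x) ** 2).sum(-1) / bandwidth**2)
        w = k * w_val
        x = (w[:, None] * X).sum(0) / w.sum()
    return x

def true_grasp_quality(x):                        # hidden simulator (assumption)
    return np.exp(-8.0 * ((x - np.array([0.6, 0.4])) ** 2).sum())

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(5, 2))                # a few demonstrated grasps
y = np.array([true_grasp_quality(x) for x in X])

for trial in range(20):
    start = X[np.argmax(gp_mean(X, X, y))]        # start from the best grasp so far
    cand = mean_shift_argmax(X, gp_mean(X, X, y), start)
    reward = true_grasp_quality(cand)             # "execute" the grasp
    X, y = np.vstack([X, cand]), np.append(y, reward)

print("best grasp found:", X[np.argmax(y)], "value:", y.max())
```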

PDF Web DOI [BibTex]

Sparse online model learning for robot control with support vector regression

Nguyen-Tuong, D., Schölkopf, B., Peters, J.

In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), pages: 3121-3126, IEEE Service Center, Piscataway, NJ, USA, 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2009 (inproceedings)

Abstract
The increasing complexity of modern robots makes it prohibitively hard to accurately model such systems as required by many applications. In such cases, machine learning methods offer a promising alternative for approximating such models using measured data. To date, high computational demands have largely restricted machine learning techniques to mostly offline applications. However, making robots adaptive to changes in the dynamics and able to cope with unexplored areas of the state space requires online learning. In this paper, we propose an approximation of support vector regression (SVR) by sparsification based on the linear independence of the training data. As a result, we obtain a method which is applicable in real-time online learning. It exhibits competitive learning accuracy when compared with standard regression techniques, such as nu-SVR, Gaussian process regression (GPR) and locally weighted projection regression (LWPR).
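
The sparsification idea can be illustrated with a kernel linear-independence (approximate linear dependence) test: a streamed point enters the dictionary only if it cannot be represented well by the current dictionary in feature space. This is a generic sketch of such a test, with an assumed RBF kernel and threshold, not the paper's full online SVR.

```python
import numpy as np

def rbf(x, z, ell=1.0):
    return np.exp(-0.5 * np.sum((x - z) ** 2) / ell**2)

class KernelDictionary:
    """Keep only training points that are (approximately) linearly independent
    in feature space: delta = k(x,x) - k_D(x)^T K_DD^{-1} k_D(x) > nu."""
    def __init__(self, nu=1e-2):
        self.nu = nu
        self.points = []

    def consider(self, x):
        if not self.points:
            self.points.append(x)
            return True
        D = np.array(self.points)
        k_vec = np.array([rbf(x, d) for d in D])
        K = np.array([[rbf(a, b) for b in D] for a in D]) + 1e-8 * np.eye(len(D))
        delta = rbf(x, x) - k_vec @ np.linalg.solve(K, k_vec)
        if delta > self.nu:
            self.points.append(x)
            return True
        return False                      # x is already well represented; skip it

# Stream of samples: only a sparse subset ends up in the dictionary.
rng = np.random.default_rng(3)
dictionary = KernelDictionary(nu=0.1)
added = sum(dictionary.consider(x) for x in rng.uniform(-1, 1, size=(200, 2)))
print("kept", added, "of 200 streamed points")
```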

Web DOI [BibTex]

Clinical PET/MRI-System and Its Applications with MRI Based Attenuation Correction

Kolb, A., Hofmann, M., Sossi, V., Wehrl, H., Sauter, A., Schmid, A., Schlemmer, H., Claussen, C., Pichler, B.

IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC 2009), 2009, pages: 1, October 2009 (poster)

Abstract
Clinical PET/MRI is an emerging hybrid imaging modality. In addition to providing a unique possibility for multifunctional imaging with temporally and spatially matched data, it also provides anatomical information that can be used for attenuation correction with no radiation exposure to the subjects. An advantage of combined over sequential PET and MR imaging is the reduction of total scan time. Here we present our initial experience with a hybrid brain PET/MRI system. Due to the ethical approval, patient scans could only be performed after a diagnostic PET/CT. We estimate that in approximately 50% of the cases PET/MRI was of superior diagnostic value compared to PET/CT and was able to provide additional information, such as DTI, spectroscopy and time-of-flight (TOF) angiography. We present three patient cases: in oncology, a retropharyngeal carcinoma; in neurooncology, a relapsing meningioma; and in neurology, a pharyngeal carcinoma in addition to an infarction of the right hemisphere. For quantitative PET imaging, attenuation correction is obligatory. In the current PET/MRI setup we used our MRI-based atlas method for calculating the mu-map for attenuation correction. MR-based attenuation correction accuracy was quantitatively compared to CT-based PET attenuation correction. Extensive studies to assess potential mutual interferences between the PET and MR imaging modalities, as well as NEMA measurements, have been performed. The first patient studies as well as the phantom tests clearly demonstrated the overall good imaging performance of this first human PET/MRI system. Ongoing work concentrates on advanced normalization and reconstruction methods incorporating count-rate based algorithms.

Web [BibTex]

Causality Discovery with Additive Disturbances: An Information-Theoretical Perspective

Zhang, K., Hyvärinen, A.

In Machine Learning and Knowledge Discovery in Databases, pages: 570-585, (Editors: Buntine, W. , M. Grobelnik, D. Mladenić, J. Shawe-Taylor ), Springer, Berlin, Germany, European Conference on Machine Learning and Knowledge Discovery in Databases: Part II (ECML PKDD '09), September 2009 (inproceedings)

Abstract
We consider causally sufficient acyclic causal models in which the relationship among the variables is nonlinear while disturbances have linear effects, and show that three principles, namely, the causal Markov condition (together with the independence between each disturbance and the corresponding parents), minimum disturbance entropy, and mutual independence of the disturbances, are equivalent. This motivates new and more efficient methods for some causal discovery problems. In particular, we propose to use multichannel blind deconvolution, an extension of independent component analysis, to do Granger causality analysis with instantaneous effects. This approach gives more accurate estimates of the parameters and can easily incorporate sparsity constraints. For additive disturbance-based nonlinear causal discovery, we first make use of the conditional independence relationships to obtain the equivalence class; undetermined causal directions are then found by nonlinear regression and pairwise independence tests. This avoids the brute-force search and greatly reduces the computational load.
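
A sketch of the pairwise step mentioned at the end: regress one variable on the other nonlinearly and test whether the residual is independent of the regressor (here with a simple HSIC-style statistic and a polynomial fit, both illustrative choices); the direction with the more independent residual is preferred.

```python
import numpy as np

def rbf_gram(x, ell):
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-0.5 * d2 / ell**2)

def hsic(x, y):
    """Biased HSIC statistic: (1/n^2) trace(K H L H), H = centering matrix."""
    n = len(x)
    ell_x = np.median(np.abs(x[:, None] - x[None, :])) + 1e-12
    ell_y = np.median(np.abs(y[:, None] - y[None, :])) + 1e-12
    K, L = rbf_gram(x, ell_x), rbf_gram(y, ell_y)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / n**2

def residual(cause, effect, degree=3):
    # Nonlinear regression of effect on cause (polynomial least squares).
    coeffs = np.polyfit(cause, effect, degree)
    return effect - np.polyval(coeffs, cause)

def infer_direction(x, y):
    score_xy = hsic(x, residual(x, y))    # residual independence under x -> y
    score_yx = hsic(y, residual(y, x))    # residual independence under y -> x
    return "x -> y" if score_xy < score_yx else "y -> x"

# Toy data generated as y = x^3 + noise, so x -> y should be preferred.
rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, 300)
y = x ** 3 + 0.1 * rng.uniform(-1, 1, 300)
print(infer_direction(x, y))
```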

PDF PDF DOI [BibTex]

Implicit Wiener Series Analysis of Epileptic Seizure Recordings

Barbero, A., Franz, M., Drongelen, W., Dorronsoro, J., Schölkopf, B., Grosse-Wentrup, M.

In EMBC 2009, pages: 5304-5307, (Editors: Y Kim and B He and G Worrell and X Pan), IEEE Service Center, Piscataway, NJ, USA, 31st Annual International Conference of the IEEE Engineering in Medicine and Biology Society, September 2009 (inproceedings)

Abstract
Implicit Wiener series are a powerful tool to build Volterra representations of time series with any degree of nonlinearity. A natural question is then whether higher order representations yield more useful models. In this work we shall study this question for ECoG data channel relationships in epileptic seizure recordings, considering whether quadratic representations yield more accurate classifiers than linear ones. To do so we first show how to derive statistical information on the Volterra coefficient distribution and how to construct seizure classification patterns over that information. As our results illustrate, a quadratic model seems to provide no advantages over a linear one. Nevertheless, we shall also show that the interpretability of the implicit Wiener series provides insights into the inter-channel relationships of the recordings.

PDF Web DOI [BibTex]

Incorporating Prior Knowledge on Class Probabilities into Local Similarity Measures for Intermodality Image Registration

Hofmann, M., Schölkopf, B., Bezrukov, I., Cahill, N.

In Proceedings of the MICCAI 2009 Workshop on Probabilistic Models for Medical Image Analysis , pages: 220-231, (Editors: W Wells and S Joshi and K Pohl), PMMIA, September 2009 (inproceedings)

Abstract
We present a methodology for incorporating prior knowledge on class probabilities into the registration process. By using knowledge from the imaging modality, pre-segmentations, and/or probabilistic atlases, we construct vectors of class probabilities for each image voxel. By defining new image similarity measures for distribution-valued images, we show how the class probability images can be nonrigidly registered in a variational framework. An experiment on nonrigid registration of MR and CT full-body scans illustrates that the proposed technique outperforms standard mutual information (MI) and normalized mutual information (NMI) based registration techniques when measured in terms of target registration error (TRE) of manually labeled fiducials.

PDF Web [BibTex]

A flowering-time gene network model for association analysis in Arabidopsis thaliana

Klotzbücher, K., Kobayashi, Y., Shervashidze, N., Borgwardt, K., Weigel, D.

2009(39):95-96, German Conference on Bioinformatics (GCB '09), September 2009 (poster)

Abstract
In our project we want to determine a set of single nucleotide polymorphisms (SNPs), which have a major effect on the flowering time of Arabidopsis thaliana. Instead of performing a genome-wide association study on all SNPs in the genome of Arabidopsis thaliana, we examine the subset of SNPs from the flowering-time gene network model. We are interested in how the results of the association study vary when using only the ascertained subset of SNPs from the flowering network model, and when additionally using the information encoded by the structure of the network model. The network model is compiled from the literature by manual analysis and contains genes which have been found to affect the flowering time of Arabidopsis thaliana [Far+08; KW07]. The genes in this model are annotated with the SNPs that are located in these genes, or in near proximity to them. In a baseline comparison between the subset of SNPs from the graph and the set of all SNPs, we omit the structural information and calculate the correlation between the individual SNPs and the flowering time phenotype by use of statistical methods. Through this we can determine the subset of SNPs with the highest correlation to the flowering time. In order to further refine this subset, we include the additional information provided by the network structure by conducting a graph-based feature pre-selection. In the further course of this project we want to validate and examine the resulting set of SNPs and their corresponding genes with experimental methods.

PDF Web [BibTex]

Markerless 3D Face Tracking (DAGM 2009)

Walder, C., Breidt, M., Bülthoff, H., Schölkopf, B., Curio, C.

In Pattern Recognition, Lecture Notes in Computer Science, Vol. 5748 , pages: 41-50, (Editors: J Denzler and G Notni and H Süsse), Springer, Berlin, Germany, 31st Symposium of the German Association for Pattern Recognition (DAGM), September 2009 (inproceedings)

Abstract
We present a novel algorithm for the markerless tracking of deforming surfaces such as faces. We acquire a sequence of 3D scans along with color images at 40Hz. The data is then represented by implicit surface and color functions, using a novel partition-of-unity type method of efficiently combining local regressors using nearest neighbor searches. Both these functions act on the 4D space of 3D plus time, and use temporal information to handle the noise in individual scans. After interactive registration of a template mesh to the first frame, it is then automatically deformed to track the scanned surface, using the variation of both shape and color as features in a dynamic energy minimization problem. Our prototype system yields high-quality animated 3D models in correspondence, at a rate of approximately twenty seconds per timestep. Tracking results for faces and other objects are presented.

PDF Web DOI [BibTex]

Object Localization with Global and Local Context Kernels

Blaschko, M., Lampert, C.

In British Machine Vision Conference 2009, pages: 1-11, BMVC, September 2009 (inproceedings)

Abstract
Recent research has shown that the use of contextual cues significantly improves performance in sliding window type localization systems. In this work, we propose a method that incorporates both global and local context information through appropriately defined kernel functions. In particular, we make use of a weighted combination of kernels defined over local spatial regions, as well as a global context kernel. The relative importance of the context contributions is learned automatically, and the resulting discriminant function is of a form such that localization at test time can be solved efficiently using a branch and bound optimization scheme. By specifying context directly with a kernel learning approach, we achieve high localization accuracy with a simple and efficient representation. This is in contrast to other systems that incorporate context for which expensive inference needs to be done at test time. We show experimentally on the PASCAL VOC datasets that the inclusion of context can significantly improve localization performance, provided the relative contributions of context cues are learned appropriately.

PDF Web [BibTex]

Efficient Sample Reuse in EM-Based Policy Search

Hachiya, H., Peters, J., Sugiyama, M.

In 16th European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, pages: 469-484, (Editors: Buntine, W. , M. Grobelnik, D. Mladenic, J. Shawe-Taylor), Springer, Berlin, Germany, ECML PKDD, September 2009 (inproceedings)

Abstract
Direct policy search is a promising reinforcement learning framework, in particular for controlling continuous, high-dimensional systems such as anthropomorphic robots. Policy search often requires a large number of samples for obtaining a stable policy update estimator due to its high flexibility. However, this is prohibitive when the sampling cost is expensive. In this paper, we extend an EM-based policy search method so that previously collected samples can be efficiently reused. The usefulness of the proposed method, called Reward-weighted Regression with sample Reuse, is demonstrated through a robot learning experiment.

PDF Web DOI [BibTex]

Active Structured Learning for High-Speed Object Detection

Lampert, C., Peters, J.

In DAGM 2009, pages: 221-231, (Editors: Denzler, J. , G. Notni, H. Süsse), Springer, Berlin, Germany, 31st Annual Symposium of the German Association for Pattern Recognition, September 2009 (inproceedings)

Abstract
High-speed, smooth and accurate visual tracking of objects in arbitrary, unstructured environments is essential for robotics and human motion analysis. However, building a system that can adapt to arbitrary objects and a wide range of lighting conditions is a challenging problem, especially if hard real-time constraints apply as in robotics scenarios. In this work, we introduce a method for learning a discriminative object tracking system based on the recent structured regression framework for object localization. Using a kernel function that allows fast evaluation on the GPU, the resulting system can process video streams at speeds of 100 frames per second or more. Consecutive frames in high-speed video sequences are typically very redundant, and for training an object detection system, it is sufficient to have training labels for only a subset of all images. We propose an active learning method that selects training examples in a data-driven way, thereby minimizing the required number of training labels. Experiments on realistic data show that the active learning approach is superior to previously used methods for dataset subsampling for this task.

PDF Web DOI [BibTex]

Generalized Clustering via Kernel Embeddings

Jegelka, S., Gretton, A., Schölkopf, B., Sriperumbudur, B., von Luxburg, U.

In KI 2009: AI and Automation, Lecture Notes in Computer Science, Vol. 5803, pages: 144-152, (Editors: B Mertsching and M Hund and Z Aziz), Springer, Berlin, Germany, 32nd Annual Conference on Artificial Intelligence (KI), September 2009 (inproceedings)

Abstract
We generalize traditional goals of clustering towards distinguishing components in a non-parametric mixture model. The clusters are not necessarily based on point locations, but on higher order criteria. This framework can be implemented by embedding probability distributions in a Hilbert space. The corresponding clustering objective is very general and relates to a range of common clustering concepts.
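
The Hilbert-space embedding behind this framework is commonly summarized by the maximum mean discrepancy (MMD); the sketch below computes a (biased) empirical MMD between two samples that share their means but differ in spread, the kind of higher-order difference such a clustering objective can exploit. The Gaussian kernel and its bandwidth are illustrative choices.

```python
import numpy as np

def rbf_gram(X, Y, ell=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def mmd2(X, Y, ell=1.0):
    """Biased estimate of the squared MMD between the distributions of X and Y:
    ||mu_X - mu_Y||^2 in the RKHS = mean K_XX + mean K_YY - 2 mean K_XY."""
    return (rbf_gram(X, X, ell).mean()
            + rbf_gram(Y, Y, ell).mean()
            - 2.0 * rbf_gram(X, Y, ell).mean())

# Two clusters that share their means but differ in spread: distinguishable
# by the embedding (higher-order criteria), not by point locations alone.
rng = np.random.default_rng(5)
X = rng.normal(0.0, 0.5, size=(200, 2))
Y = rng.normal(0.0, 2.0, size=(200, 2))
print("MMD^2(X, Y)      =", mmd2(X, Y))
print("MMD^2(X, X-copy) =", mmd2(X, rng.normal(0.0, 0.5, size=(200, 2))))
```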

PDF PDF Web DOI [BibTex]

Discovering Temporal Patterns of Differential Gene Expression in Microarray Time Series

Stegle, O., Denby, KJ., Wild, DL., McHattie, S., Mead, A., Ghahramani, Z., Borgwardt, KM.

In Proceedings of the German Conference on Bioinformatics 2009 (GCB 2009), pages: 133-142, (Editors: Grosse, I. , S. Neumann, S. Posch, F. Schreiber, P. F. Stadler), Gesellschaft für Informatik, Bonn, Germany, German Conference on Bioinformatics (GCB '09), September 2009 (inproceedings)

Abstract
A wealth of time series of microarray measurements have become available over recent years. Several two-sample tests for detecting differential gene expression in these time series have been defined, but they can only answer the question whether a gene is differentially expressed across the whole time series, not in which intervals it is differentially expressed. In this article, we propose a Gaussian process based approach for studying these dynamics of differential gene expression. In experiments on Arabidopsis thaliana gene expression levels, our novel technique helps us to uncover that the family of WRKY transcription factors appears to be involved in the early response to infection by a fungal pathogen.

PDF Web [BibTex]

Initial Data from a first PET/MRI-System and its Applications in Clinical Studies Using MRI Based Attenuation Correction

Kolb, A., Hofmann, M., Sossi, V., Wehrl, H., Sauter, A., Schmid, A., Judenhofer, M., Schlemmer, H., Claussen, C., Pichler, B.

2009 World Molecular Imaging Congress, 2009, pages: 1200, September 2009 (poster)

Web [BibTex]

A High-Speed Object Tracker from Off-the-Shelf Components

Lampert, C., Peters, J.

First IEEE Workshop on Computer Vision for Humanoid Robots in Real Environments at ICCV 2009, 1, pages: 1, September 2009 (poster)

Abstract
We introduce RTblob, an open-source real-time vision system for 3D object detection that achieves over 200 Hz tracking speed with only off-the-shelf hardware components. It allows fast and accurate tracking of colored objects in 3D without expensive and often custom-built hardware, instead making use of the PC graphics card for the necessary image processing operations.

PDF Web [BibTex]

Estimating Critical Stimulus Features from Psychophysical Data: The Decision-Image Technique Applied to Human Faces

Macke, J., Wichmann, F.

Journal of Vision, 9(8):31, 9th Annual Meeting of the Vision Sciences Society (VSS), August 2009 (poster)

Abstract
One of the main challenges in the sensory sciences is to identify the stimulus features on which the sensory systems base their computations: they are a pre-requisite for computational models of perception. We describe a technique, termed decision-images, for extracting critical stimulus features based on logistic regression. Rather than embedding the stimuli in noise, as is done in classification image analysis, we want to infer the important features directly from physically heterogeneous stimuli. A decision-image not only defines the critical region-of-interest within a stimulus but is a quantitative template which defines a direction in stimulus space. Decision-images thus enable the development of predictive models, as well as the generation of optimized stimuli for subsequent psychophysical investigations. Here we describe our method and apply it to data from a human face discrimination experiment. We show that decision-images are able to predict human responses not only in terms of overall percent correct but are able to predict, for individual observers, the probabilities with which individual faces are (mis-) classified. We then test the predictions of the models using optimized stimuli. Finally, we discuss possible generalizations of the approach and its relationships with other models.
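
A minimal sketch of the core idea, on synthetic stimuli: fit logistic regression to image-valued stimuli and binary responses, and read the weight vector back as an image-shaped template (the decision-image). The hidden critical region, image size, and noise level are assumptions for illustration.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=500, lam=1e-2):
    """Plain gradient descent on L2-regularized logistic regression."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) / len(y) + lam * w
        w -= lr * grad
    return w

# Synthetic "stimuli": 16x16 images; responses depend on a hidden template.
rng = np.random.default_rng(6)
side = 16
template = np.zeros((side, side))
template[4:12, 6:10] = 1.0                      # hidden critical region
stimuli = rng.normal(size=(500, side, side))
X = stimuli.reshape(500, -1)
y = (X @ template.ravel() + 0.5 * rng.normal(size=500) > 0).astype(float)

decision_image = fit_logistic(X, y).reshape(side, side)
# The recovered weights concentrate on the hidden critical region, giving
# both a region-of-interest and a direction in stimulus space.
print("mean |weight| inside region :", np.abs(decision_image[4:12, 6:10]).mean())
print("mean |weight| outside region:", np.abs(decision_image).mean())
```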

Web DOI [BibTex]

Control Design Based on Analytical Stability Criteria for Optimized Kinesthetic Perception in Scaled Teleoperation

Son, HI., Bhattacharjee, T., Lee, DY.

In ICCAS-SICE International Joint Conference, pages: 3365-3370, IEEE, Piscataway, NJ, USA, ICCAS-SICE International Joint Conference, August 2009 (inproceedings)

Abstract
This paper considers kinesthetic perception as the main performance objective for a scaled teleoperation system, and devises a scheme to optimize it with constraints of position tracking and absolute stability. Analytical criteria for monitoring stability have been derived for position-position, force-position, and four-channel control architectures using Llewellyn's absolute stability criteria. This helps to reduce the optimization complexity and provides an easy and effective design guideline for selecting control gains amongst the range. Optimization results indicate that trade-offs exist among different control architectures. This paper provides guidelines based on application-dependent selection of control scheme.

Web [BibTex]

Expectation Propagation on the Maximum of Correlated Normal Variables

Hennig, P.

Cavendish Laboratory: University of Cambridge, July 2009 (techreport)

Abstract
Many inference problems involving questions of optimality ask for the maximum or the minimum of a finite set of unknown quantities. This technical report derives the first two posterior moments of the maximum of two correlated Gaussian variables and the first two posterior moments of the two generating variables (corresponding to Gaussian approximations minimizing relative entropy). It is shown how this can be used to build a heuristic approximation to the maximum relationship over a finite set of Gaussian variables, allowing approximate inference by Expectation Propagation on such quantities.
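
For reference, the classical first two moments of the maximum of two correlated Gaussians (Clark, 1961), i.e. the kind of quantities such a derivation produces, are:

```latex
% First two moments of m = max(X_1, X_2) for jointly Gaussian (X_1, X_2) with
% means mu_i, variances sigma_i^2 and correlation rho (Clark, 1961);
% Phi and phi denote the standard normal CDF and PDF.
\begin{align*}
\theta &= \sqrt{\sigma_1^2 + \sigma_2^2 - 2\rho\sigma_1\sigma_2},
\qquad \alpha = \frac{\mu_1 - \mu_2}{\theta},\\
\mathbb{E}[m]   &= \mu_1\,\Phi(\alpha) + \mu_2\,\Phi(-\alpha) + \theta\,\phi(\alpha),\\
\mathbb{E}[m^2] &= (\mu_1^2 + \sigma_1^2)\,\Phi(\alpha)
                 + (\mu_2^2 + \sigma_2^2)\,\Phi(-\alpha)
                 + (\mu_1 + \mu_2)\,\theta\,\phi(\alpha).
\end{align*}
```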

Web [BibTex]

Consistent Nonparametric Tests of Independence

Gretton, A., Györfi, L.

(172), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, July 2009 (techreport)

Abstract
Three simple and explicit procedures for testing the independence of two multi-dimensional random variables are described. Two of the associated test statistics (L1, log-likelihood) are defined when the empirical distribution of the variables is restricted to finite partitions. A third test statistic is defined as a kernel-based independence measure. Two kinds of tests are provided. Distribution-free strong consistent tests are derived on the basis of large deviation bounds on the test statistics: these tests make almost surely no Type I or Type II error after a random sample size. Asymptotically alpha-level tests are obtained from the limiting distribution of the test statistics. For the latter tests, the Type I error converges to a fixed non-zero value alpha, and the Type II error drops to zero, for increasing sample size. All tests reject the null hypothesis of independence if the test statistics become large. The performance of the tests is evaluated experimentally on benchmark data.
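
One way a kernel-based statistic of this flavour can be turned into an (approximately) alpha-level test is by calibrating it against a permutation null; the sketch below uses a biased HSIC-style statistic with Gaussian kernels and median-heuristic bandwidths, which are illustrative choices rather than the exact procedures analysed in the report.

```python
import numpy as np

def gram(x, ell):
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-0.5 * d2 / ell**2)

def kernel_dependence(x, y):
    """Biased kernel dependence statistic: (1/n^2) tr(K H L H)."""
    n = len(x)
    K = gram(x, np.median(np.abs(x[:, None] - x[None, :])) + 1e-12)
    L = gram(y, np.median(np.abs(y[:, None] - y[None, :])) + 1e-12)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / n**2

def independence_test(x, y, alpha=0.05, n_perm=200, rng=None):
    """Reject independence if the statistic is large relative to its
    permutation null (shuffling y breaks any dependence on x)."""
    rng = rng or np.random.default_rng()
    stat = kernel_dependence(x, y)
    null = [kernel_dependence(x, rng.permutation(y)) for _ in range(n_perm)]
    return stat > np.quantile(null, 1.0 - alpha)

rng = np.random.default_rng(7)
x = rng.normal(size=200)
y_dep = x ** 2 + 0.1 * rng.normal(size=200)      # dependent but uncorrelated
y_ind = rng.normal(size=200)
print("dependent pair rejected  :", independence_test(x, y_dep, rng=rng))
print("independent pair rejected:", independence_test(x, y_ind, rng=rng))
```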

PDF [BibTex]

Semi-supervised Analysis of Human fMRI Data

Shelton, JA., Blaschko, MB., Lampert, CH., Bartels, A.

Berlin Brain Computer Interface Workshop on Advances in Neurotechnology, 2009, pages: 1, July 2009 (poster)

Abstract
Kernel Canonical Correlation Analysis (KCCA) is a general technique for subspace learning that incorporates principal components analysis (PCA) and Fisher linear discriminant analysis (LDA) as special cases. By finding directions that maximize correlation, CCA learns representations tied more closely to the underlying process generating the data and can ignore high-variance noise directions. However, for data where acquisition in a given modality is expensive or otherwise limited, CCA may suffer from small sample effects. We propose to use semi-supervised Laplacian regularization to utilize data that are present in only one modality. This approach is able to find highly correlated directions that also lie along the data manifold, resulting in a more robust estimate of correlated subspaces. Functional magnetic resonance imaging (fMRI) acquired data are naturally amenable to subspace techniques as data are well aligned. fMRI data of the human brain are a particularly interesting candidate. In this study we implemented various supervised and semi-supervised versions of CCA on human fMRI data, with regression to single and multivariate labels (corresponding to video content subjects viewed during the image acquisition). In each variate condition, the semi-supervised variants of CCA performed better than the supervised variants, including a supervised variant with Laplacian regularization. We additionally analyze the weights learned by the regression in order to infer brain regions that are important to different types of visual processing.

PDF Web [BibTex]

Active learning for classification of remote sensing images

Bruzzone, L., Persello, C.

In pages: III-693-III-696 , IEEE, Piscataway, NJ, USA, IEEE International Geoscience and Remote Sensing Symposium (IGARSS), July 2009 (inproceedings)

Abstract
This paper presents an analysis of active learning techniques for the classification of remote sensing images and proposes a novel active learning method based on support vector machines (SVMs). The proposed method exploits a query function for the inclusion of batches of unlabeled samples in the training set, which is based on the evaluation of two criteria: uncertainty and diversity. This query function adopts a stochastic approach to the selection of unlabeled samples, which is based on a function of uncertainty estimated from the distribution of errors on the validation set (which is assumed available for the model selection of the SVM classifier). Experimental results carried out on a very high resolution image confirm the effectiveness of the proposed active learning technique, which proves more accurate than standard methods.

Web DOI [BibTex]

A novel approach to the selection of spatially invariant features for classification of hyperspectral images

Persello, C., Bruzzone, L.

In pages: II-61-II-64 , IEEE, Piscataway, NJ, USA, IEEE International Geoscience and Remote Sensing Symposium (IGARSS), July 2009 (inproceedings)

Abstract
This paper presents a novel approach to feature selection for the classification of hyperspectral images. The proposed approach aims at selecting a subset of the original set of features that exhibits two main properties: i) high capability to discriminate among the considered classes, ii) high invariance in the spatial domain of the investigated scene. This approach results in a more robust classification system with improved generalization properties with respect to standard feature-selection methods. The feature selection is accomplished by defining a multi-objective criterion function made up of two terms: i) a term that measures the class separability, ii) a term that evaluates the spatial invariance of the selected features. In order to assess the spatial invariance of the feature subset we propose both a supervised method and a semisupervised method (which choice depends on the available reference data). The multi-objective problem is solved by an evolutionary algorithm that estimates the set of Pareto-optimal solutions. Experiments carried out on a hyperspectral image acquired by the Hyperion sensor on a complex area confirmed the effectiveness of the proposed approach.

Web DOI [BibTex]

Varieties of Justification in Machine Learning

Corfield, D.

In Proceedings of Multiplicity and Unification in Statistics and Probability, pages: 1-10, Multiplicity and Unification in Statistics and Probability, June 2009 (inproceedings)

Abstract
The field of machine learning has flourished over the past couple of decades. With huge amounts of data available, efficient algorithms can learn to extrapolate from their training sets to become very accurate classifiers. For example, it is straightforward now to develop classifiers which achieve accuracies of around 99% on databases of handwritten digits. Now these algorithms have been devised by theorists who arrive at the problem of machine learning with a range of different philosophical outlooks on the subject of inductive reasoning. This has led to a wide range of theoretical rationales for their work. In this talk I shall classify the different forms of justification for inductive machine learning into four kinds, and make some comparisons between them. With little by way of theoretical knowledge to aid in the learning tasks, while the relevance of these justificatory approaches for the inductive reasoning of the natural sciences is questionable, certain issues surrounding the presuppositions of inductive reasoning are brought sharply into focus. In particular, Frequentist, Bayesian and MDL outlooks can be compared.

PDF Web [BibTex]

Effects of Stimulus Type and of Error-Correcting Code Design on BCI Speller Performance

Hill, J., Farquhar, J., Martens, S., Biessmann, F., Schölkopf, B.

In Advances in neural information processing systems 21, pages: 665-672, (Editors: D Koller and D Schuurmans and Y Bengio and L Bottou), Curran, Red Hook, NY, USA, 22nd Annual Conference on Neural Information Processing Systems (NIPS), June 2009 (inproceedings)

Abstract
From an information-theoretic perspective, a noisy transmission system such as a visual Brain-Computer Interface (BCI) speller could benefit from the use of error-correcting codes. However, optimizing the code solely according to the maximal minimum-Hamming-distance criterion tends to lead to an overall increase in the frequency of target stimuli, and hence a significantly reduced average target-to-target interval (TTI), leading to difficulties in classifying the individual event-related potentials (ERPs) due to overlap and refractory effects. Clearly any change to the stimulus setup must also respect the possible psychophysiological consequences. Here we report new EEG data from experiments in which we explore stimulus types and codebooks in a within-subject design, finding an interaction between the two factors. Our data demonstrate that the traditional row-column code has particular spatial properties that lead to better performance than one would expect from its TTIs and Hamming distances alone, but nonetheless error-correcting codes can improve performance provided the right stimulus type is used.

PDF PDF Web [BibTex]

Influence of graph construction on graph-based clustering measures

Maier, M., von Luxburg, U., Hein, M.

In Advances in neural information processing systems 21, pages: 1025-1032, (Editors: Koller, D. , D. Schuurmans, Y. Bengio, L. Bottou), Curran, Red Hook, NY, USA, Twenty-Second Annual Conference on Neural Information Processing Systems (NIPS), June 2009 (inproceedings)

Abstract
Graph clustering methods such as spectral clustering are defined for general weighted graphs. In machine learning, however, data often is not given in form of a graph, but in terms of similarity (or distance) values between points. In this case, first a neighborhood graph is constructed using the similarities between the points and then a graph clustering algorithm is applied to this graph. In this paper we investigate the influence of the construction of the similarity graph on the clustering results. We first study the convergence of graph clustering criteria such as the normalized cut (Ncut) as the sample size tends to infinity. We find that the limit expressions are different for different types of graph, for example the r-neighborhood graph or the k-nearest neighbor graph. In plain words: Ncut on a kNN graph does something systematically different than Ncut on an r-neighborhood graph! This finding shows that graph clustering criteria cannot be studied independently of the kind of graph they are applied to. We also provide examples which show that these differences can be observed for toy and real data already for rather small sample sizes.

PDF Web [BibTex]

Local Gaussian Process Regression for Real Time Online Model Learning and Control

Nguyen-Tuong, D., Seeger, M., Peters, J.

In Advances in neural information processing systems 21, pages: 1193-1200, (Editors: Koller, D. , D. Schuurmans, Y. Bengio, L. Bottou), Curran, Red Hook, NY, USA, Twenty-Second Annual Conference on Neural Information Processing Systems (NIPS), June 2009 (inproceedings)

Abstract
Learning in real-time applications, e.g., online approximation of the inverse dynamics model for model-based robot control, requires fast online regression techniques. Inspired by local learning, we propose a method to speed up standard Gaussian process regression (GPR) with local GP models (LGP). The training data are partitioned into local regions, and for each region an individual GP model is trained. The prediction for a query point is performed by weighted estimation using nearby local models. Unlike other GP approximations, such as mixtures of experts, we use a distance-based measure for partitioning of the data and weighted prediction. The proposed method achieves online learning and prediction in real-time. Comparisons with other nonparametric regression methods show that LGP has higher accuracy than LWPR and close to the performance of standard GPR and nu-SVR.
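
A compact sketch of the partition-then-blend idea under illustrative choices (random training points as region centers, an RBF kernel, Gaussian distance weights): assign training points to local centers, fit one small GP per region, and blend the local predictive means with distance-based weights.

```python
import numpy as np

def k_rbf(A, B, ell=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

class LocalGP:
    def __init__(self, X, y, n_regions=5, noise=1e-2, rng=None):
        rng = rng or np.random.default_rng(0)
        # Pick region centers (here: random training points) and partition the data.
        self.centers = X[rng.choice(len(X), n_regions, replace=False)]
        assign = np.argmin(((X[:, None, :] - self.centers[None]) ** 2).sum(-1), axis=1)
        self.models = []
        for r in range(n_regions):
            Xr, yr = X[assign == r], y[assign == r]
            Kr = k_rbf(Xr, Xr) + noise * np.eye(len(Xr))
            self.models.append((Xr, np.linalg.solve(Kr, yr)))   # cache alpha per region

    def predict(self, Xq, bandwidth=0.5):
        # Local predictive means, blended with distance-based weights to the centers.
        preds = np.stack([k_rbf(Xq, Xr) @ alpha for Xr, alpha in self.models], axis=1)
        d2 = ((Xq[:, None, :] - self.centers[None]) ** 2).sum(-1)
        w = np.exp(-0.5 * d2 / bandwidth**2) + 1e-12
        return (w * preds).sum(1) / w.sum(1)

# Toy 1D regression problem.
rng = np.random.default_rng(8)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=400)
model = LocalGP(X, y, n_regions=8, rng=rng)
Xq = np.linspace(-3, 3, 7)[:, None]
print("LGP prediction:", np.round(model.predict(Xq), 2))
print("ground truth  :", np.round(np.sin(2 * Xq[:, 0]), 2))
```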

PDF Web [BibTex]

Detecting the Direction of Causal Time Series

Peters, J., Janzing, D., Gretton, A., Schölkopf, B.

In Proceedings of the 26th International Conference on Machine Learning, pages: 801-808, (Editors: A Danyluk and L Bottou and ML Littman), ACM Press, New York, NY, USA, ICML, June 2009 (inproceedings)

Abstract
We propose a method that detects the true direction of time series, by fitting an autoregressive moving average model to the data. Whenever the noise is independent of the previous samples for one ordering of the observations, but dependent for the opposite ordering, we infer the former direction to be the true one. We prove that our method works in the population case as long as the noise of the process is not normally distributed (for the latter case, the direction is not identifiable). A new and important implication of our result is that it confirms a fundamental conjecture in causal reasoning - if after regression the noise is independent of the signal for one direction and dependent for the other, then the former represents the true causal direction - in the case of time series. We test our approach on two types of data: simulated data sets conforming to our modeling assumptions, and real world EEG time series. Our method makes a decision for a significant fraction of both data sets, and these decisions are mostly correct. For real world data, our approach outperforms alternative solutions to the problem of time direction recovery.
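
A toy sketch of the decision rule, with deliberate simplifications (an AR(1) fit instead of a full ARMA model, and a cheap correlation-of-transforms proxy instead of a proper independence test): fit the model in both temporal directions and prefer the direction whose residuals look independent of the preceding values.

```python
import numpy as np

def ar1_residuals(x):
    """Least-squares AR(1) fit x_t ~ a * x_{t-1}; return (residuals, regressors)."""
    past, present = x[:-1], x[1:]
    a = float(past @ present) / float(past @ past)
    return present - a * past, past

def dependence_score(e, z):
    """Cheap proxy for dependence between residuals and regressors:
    correlations of a few nonlinear transforms (all close to 0 under independence)."""
    def corr(u, v):
        u, v = u - u.mean(), v - v.mean()
        return abs(float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
    return corr(e, z) + corr(e**2, z) + corr(e, z**2) + corr(e**2, z**2)

def infer_time_direction(x):
    score_fwd = dependence_score(*ar1_residuals(x))
    score_bwd = dependence_score(*ar1_residuals(x[::-1]))
    return "forward" if score_fwd < score_bwd else "backward"

# Toy AR(1) process with skewed (non-Gaussian) innovations: only in the true
# (forward) direction are the residuals independent of the preceding values.
rng = np.random.default_rng(9)
x = np.zeros(3000)
for t in range(1, 3000):
    x[t] = 0.7 * x[t - 1] + (rng.exponential(1.0) - 1.0)
print(infer_time_direction(x))
```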

PDF Web DOI [BibTex]

Text Clustering with Mixture of von Mises-Fisher Distributions

Sra, S., Banerjee, A., Ghosh, J., Dhillon, I.

In Text mining: classification, clustering, and applications, pages: 121-161, Chapman & Hall/CRC data mining and knowledge discovery series, (Editors: Srivastava, A. N. and Sahami, M.), CRC Press, Boca Raton, FL, USA, June 2009 (inbook)

Web DOI [BibTex]

Learning To Detect Unseen Object Classes by Between-Class Attribute Transfer

Lampert, C., Nickisch, H., Harmeling, S.

In CVPR 2009, pages: 951-958, IEEE Service Center, Piscataway, NJ, USA, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 2009 (inproceedings)

Abstract
We study the problem of object classification when training and test classes are disjoint, i.e. no training examples of the target classes are available. This setup has hardly been studied in computer vision research, but it is the rule rather than the exception, because the world contains tens of thousands of different object classes and for only a very few of them image collections have been formed and annotated with suitable class labels. In this paper, we tackle the problem by introducing attribute-based classification. It performs object detection based on a human-specified high-level description of the target objects instead of training images. The description consists of arbitrary semantic attributes, like shape, color or even geographic information. Because such properties transcend the specific learning task at hand, they can be pre-learned, e.g. from image datasets unrelated to the current task. Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In order to evaluate our method and to facilitate research in this area, we have assembled a new large-scale dataset, "Animals with Attributes", of over 30,000 animal images that match the 50 classes in Osherson's classic table of how strongly humans associate 85 semantic attributes with animal classes. Our experiments show that by using an attribute layer it is indeed possible to build a learning object detection system that does not require any training images of the target classes.
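
A minimal sketch of attribute-based classification in this spirit, on synthetic data: per-attribute classifiers are trained on the seen classes only, and test images of an unseen class are labeled by matching predicted attribute probabilities against a class-attribute table. The attribute table, features, and classifier are stand-in assumptions, not the paper's setup.

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, n_iter=400):
    """Plain gradient descent on logistic regression (last column of X is a bias)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * (X.T @ (p - y) / len(y))
    return w

def with_bias(X):
    return np.hstack([X, np.ones((len(X), 1))])

# Class-attribute table: rows = classes, columns = binary attributes.
# The first four classes are "seen" (training); the last one is "unseen".
class_attr = np.array([[0, 0, 0],
                       [1, 1, 0],
                       [1, 0, 1],
                       [0, 1, 1],
                       [1, 0, 0]], dtype=float)   # last row: unseen class
seen, unseen = [0, 1, 2, 3], 4
n_attr = class_attr.shape[1]

# Synthetic features: a noisy copy of the class's attribute vector.
rng = np.random.default_rng(10)
def sample(cls, n):
    return class_attr[cls] + 0.3 * rng.normal(size=(n, n_attr))

X_train = np.vstack([sample(c, 200) for c in seen])
y_train = np.repeat(seen, 200)

# One classifier per attribute, trained only on examples of the seen classes.
attr_models = [fit_logistic(with_bias(X_train), class_attr[y_train, a])
               for a in range(n_attr)]

# Zero-shot prediction: predict attribute probabilities for test images of the
# unseen class, then pick the class whose attribute signature matches best.
X_test = sample(unseen, 100)
attr_prob = np.stack([1.0 / (1.0 + np.exp(-with_bias(X_test) @ w))
                      for w in attr_models], axis=1)
pred = np.argmin(((attr_prob[:, None, :] - class_attr[None]) ** 2).sum(-1), axis=1)
print("fraction of unseen-class test images assigned correctly:", (pred == unseen).mean())
```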

PDF Web DOI [BibTex]

Learning Taxonomies by Dependence Maximization

Blaschko, M., Gretton, A.

In Advances in neural information processing systems 21, pages: 153-160, (Editors: Koller, D. , D. Schuurmans, Y. Bengio, L. Bottou), Curran, Red Hook, NY, USA, Twenty-Second Annual Conference on Neural Information Processing Systems (NIPS), June 2009 (inproceedings)

Abstract
We introduce a family of unsupervised algorithms, numerical taxonomy clustering, to simultaneously cluster data, and to learn a taxonomy that encodes the relationship between the clusters. The algorithms work by maximizing the dependence between the taxonomy and the original data. The resulting taxonomy is a more informative visualization of complex data than simple clustering; in addition, taking into account the relations between different clusters is shown to substantially improve the quality of the clustering, when compared with state-ofthe-art algorithms in the literature (both spectral clustering and a previous dependence maximization approach). We demonstrate our algorithm on image and text data.

PDF Web [BibTex]

Learning object-specific grasp affordance densities

Detry, R., Baseski, E., Popovic, M., Touati, Y., Krüger, N., Kroemer, O., Peters, J., Piater, J.

In 8th IEEE International Conference on Development and Learning, pages: 1-7, IEEE Service Center, Piscataway, NJ, USA, ICDL, June 2009 (inproceedings)

Abstract
This paper addresses the issue of learning and representing object grasp affordances, i.e. object-gripper relative configurations that lead to successful grasps. The purpose of grasp affordances is to organize and store the whole knowledge that an agent has about the grasping of an object, in order to facilitate reasoning on grasping solutions and their achievability. The affordance representation consists of a continuous probability density function defined on the 6D gripper pose space (3D position and orientation), within an object-relative reference frame. Grasp affordances are initially learned from various sources, e.g. from imitation or from visual cues, leading to grasp hypothesis densities. Grasp densities are attached to a learned 3D visual object model, and pose estimation of the visual model allows a robotic agent to execute samples from a grasp hypothesis density under various object poses. Grasp outcomes are used to learn grasp empirical densities, i.e. grasps that have been confirmed through experience. We show the result of learning grasp hypothesis densities from both imitation and visual cues, and present grasp empirical densities learned from physical experience by a robot.
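
A small sketch of the grasp-density idea in a simplified setting: successful object-relative grasp configurations (here 3D positions only, ignoring orientation) are summarized by a kernel density estimate, which can be evaluated at a query configuration and sampled for new grasp candidates. The Gaussian KDE and the synthetic clusters are illustrative; the paper works with the full 6D pose space.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Object-relative grasp positions that previously led to successful grasps,
# e.g. obtained from imitation or confirmed by physical trials (synthetic here).
rng = np.random.default_rng(11)
handle = rng.normal(loc=[0.00, 0.05, 0.10], scale=0.01, size=(40, 3))   # grasps at a "handle"
rim    = rng.normal(loc=[0.06, 0.00, 0.12], scale=0.02, size=(20, 3))   # grasps at a "rim"
grasps = np.vstack([handle, rim])

# Grasp (hypothesis/empirical) density: a continuous pdf over grasp configurations.
density = gaussian_kde(grasps.T)        # gaussian_kde expects shape (dims, samples)

# Evaluate how promising an arbitrary configuration is ...
query = np.array([[0.0, 0.05, 0.10]]).T
print("density at handle grasp:", float(density(query)))

# ... and sample new grasp candidates to execute under the current object pose.
candidates = density.resample(5).T
print("sampled grasp candidates:\n", np.round(candidates, 3))
```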

PDF Web DOI [BibTex]