

2016


Causal and statistical learning

Schölkopf, B., Janzing, D., Lopez-Paz, D.

Oberwolfach Reports, 13(3):1896-1899, (Editors: A. Christmann and K. Jetter and S. Smale and D.-X. Zhou), 2016 (conference)

DOI [BibTex]

Bootstrat: Population Informed Bootstrapping for Rare Variant Tests

Huang, H., Peloso, G. M., Howrigan, D., Rakitsch, B., Simon-Gabriel, C. J., Goldstein, J. I., Daly, M. J., Borgwardt, K., Neale, B. M.

bioRxiv, 2016, preprint (article)

link (url) DOI [BibTex]

Probabilistic Movement Models Show that Postural Control Precedes and Predicts Volitional Motor Control

Rueckert, E., Camernik, J., Peters, J., Babic, J.

Nature PG: Scientific Reports, 6(Article number: 28455), 2016 (article)

DOI Project Page [BibTex]

Learning Taxonomy Adaptation in Large-scale Classification

Babbar, R., Partalas, I., Gaussier, E., Amini, M., Amblard, C.

Journal of Machine Learning Research, 17(98):1-37, 2016 (article)

link (url) Project Page [BibTex]

Multi-view learning on multiparametric PET/MRI quantifies intratumoral heterogeneity and determines therapy efficacy

Katiyar, P., Divine, M. R., Kohlhofer, U., Quintanilla-Martinez, L., Siegemund, M., Pfizenmaier, K., Kontermann, R., Pichler, B. J., Disselhorst, J. A.

World Molecular Imaging Conference, 2016 (talk)

link (url) [BibTex]

Hippocampal neural events predict ongoing brain-wide BOLD activity

Besserve, M., Logothetis, N. K.

47th Annual Meeting of the Society for Neuroscience (Neuroscience), 2016 (poster)

[BibTex]

BOiS—Berlin Object in Scene Database: Controlled Photographic Images for Visual Search Experiments with Quantified Contextual Priors

Mohr, J., Seyfarth, J., Lueschow, A., Weber, J. E., Wichmann, F. A., Obermayer, K.

Frontiers in Psychology, 2016 (article)

DOI [BibTex]

Preface to the ACM TIST Special Issue on Causal Discovery and Inference

Zhang, K., Li, J., Bareinboim, E., Schölkopf, B., Pearl, J.

ACM Transactions on Intelligent Systems and Technology, 7(2):article no. 17, 2016 (article)

DOI [BibTex]

Recurrent Spiking Networks Solve Planning Tasks

Rueckert, E., Kappel, D., Tanneberg, D., Pecevski, D., Peters, J.

Nature PG: Scientific Reports, 6(Article number: 21142), 2016 (article)

DOI Project Page [BibTex]

Bio-inspired feedback-circuit implementation of discrete, free energy optimizing, winner-take-all computations

Genewein, T, Braun, DA

Biological Cybernetics, 110(2):135–150, June 2016 (article)

Abstract
Bayesian inference and bounded rational decision-making require the accumulation of evidence or utility, respectively, to transform a prior belief or strategy into a posterior probability distribution over hypotheses or actions. Crucially, this process cannot be simply realized by independent integrators, since the different hypotheses and actions also compete with each other. In continuous time, this competitive integration process can be described by a special case of the replicator equation. Here we investigate simple analog electric circuits that implement the underlying differential equation under the constraint that we only permit a limited set of building blocks that we regard as biologically interpretable, such as capacitors, resistors, voltage-dependent conductances and voltage- or current-controlled current and voltage sources. The appeal of these circuits is that they intrinsically perform normalization without requiring an explicit divisive normalization. However, even in idealized simulations, we find that these circuits are very sensitive to internal noise as they accumulate error over time. We discuss to what extent neural circuits could implement these operations, which might provide a generic competitive principle underlying both perception and action.
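For reference, the competitive integration process mentioned above can be written, in its standard form (a generic statement, not necessarily the exact special case analyzed in the paper), as the replicator dynamics

\[ \dot{p}_i(t) = p_i(t)\Big(f_i(t) - \sum_j p_j(t)\, f_j(t)\Big), \]

where p_i is the probability (or weight) assigned to hypothesis or action i and f_i its accumulated evidence or utility; subtracting the population average keeps \(\sum_i p_i(t) = 1\), which is the intrinsic normalization that the circuits exploit.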

DOI [BibTex]

Decision-Making under Ambiguity Is Modulated by Visual Framing, but Not by Motor vs. Non-Motor Context: Experiments and an Information-Theoretic Ambiguity Model

Grau-Moya, J, Ortega, PA, Braun, DA

PLoS ONE, 11(4):1-21, April 2016 (article)

Abstract
A number of recent studies have investigated differences in human choice behavior depending on task framing, especially comparing economic decision-making to choice behavior in equivalent sensorimotor tasks. Here we test whether decision-making under ambiguity exhibits effects of task framing in motor vs. non-motor context. In a first experiment, we designed an experience-based urn task with varying degrees of ambiguity and an equivalent motor task where subjects chose between hitting partially occluded targets. In a second experiment, we controlled for the different stimulus design in the two tasks by introducing an urn task with bar stimuli matching those in the motor task. We found ambiguity attitudes to be mainly influenced by stimulus design. In particular, we found that the same subjects tended to be ambiguity-preferring when choosing between ambiguous bar stimuli, but ambiguity-avoiding when choosing between ambiguous urn sample stimuli. In contrast, subjects’ choice pattern was not affected by changing from a target hitting task to a non-motor context when keeping the stimulus design unchanged. In both tasks subjects’ choice behavior was continuously modulated by the degree of ambiguity. We show that this modulation of behavior can be explained by an information-theoretic model of ambiguity that generalizes Bayes-optimal decision-making by combining Bayesian inference with robust decision-making under model uncertainty. Our results demonstrate the benefits of information-theoretic models of decision-making under varying degrees of ambiguity for a given context, but also demonstrate the sensitivity of ambiguity attitudes across contexts that theoretical models struggle to explain.

DOI [BibTex]

2009


Learning an Interactive Segmentation System

Nickisch, H., Kohli, P., Rother, C.

Max Planck Institute for Biological Cybernetics, December 2009 (techreport)

Abstract
Many successful applications of computer vision to image or video manipulation are interactive by nature. However, parameters of such systems are often trained neglecting the user. Traditionally, interactive systems have been treated in the same manner as their fully automatic counterparts. Their performance is evaluated by computing the accuracy of their solutions under some fixed set of user interactions. This paper proposes a new evaluation and learning method which brings the user in the loop. It is based on the use of an active robot user - a simulated model of a human user. We show how this approach can be used to evaluate and learn parameters of state-of-the-art interactive segmentation systems. We also show how simulated user models can be integrated into the popular max-margin method for parameter learning and propose an algorithm to solve the resulting optimisation problem.

Web [BibTex]

Machine Learning for Brain-Computer Interfaces

Hill, NJ.

Mini-Symposia on Assistive Machine Learning for People with Disabilities at NIPS (AMD), December 2009 (talk)

Abstract
Brain-computer interfaces (BCI) aim to be the ultimate in assistive technology: decoding a user's intentions directly from brain signals without involving any muscles or peripheral nerves. Thus, some classes of BCI potentially offer hope for users with even the most extreme cases of paralysis, such as in late-stage Amyotrophic Lateral Sclerosis, where nothing else currently allows communication of any kind. Other lines in BCI research aim to restore lost motor function in as natural a way as possible, reconnecting and in some cases re-training motor-cortical areas to control prosthetic, or previously paretic, limbs. Research and development are progressing on both invasive and non-invasive fronts, although BCI has yet to make a breakthrough to widespread clinical application. The high-noise, high-dimensional nature of brain signals, particularly in non-invasive approaches and in patient populations, makes robust decoding techniques a necessity. Generally, the approach has been to use relatively simple feature extraction techniques, such as template matching and band-power estimation, coupled to simple linear classifiers. This has led to a prevailing view among applied BCI researchers that (sophisticated) machine learning is irrelevant since "it doesn't matter what classifier you use once you've done your preprocessing right and extracted the right features." I shall show a few examples of how this runs counter to both the empirical reality and the spirit of what needs to be done to bring BCI into clinical application. Along the way I'll highlight some of the interesting problems that remain open for machine learners.

PDF Web Web [BibTex]

Learning Probabilistic Models via Bayesian Inverse Planning

Boularias, A., Chaib-Draa, B.

NIPS Workshop on Probabilistic Approaches for Robotics and Control, December 2009 (poster)

PDF Web [BibTex]

Efficient Subwindow Search: A Branch and Bound Framework for Object Localization

Lampert, C., Blaschko, M., Hofmann, T.

IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(12):2129-2142, December 2009 (article)

Abstract
Most successful object recognition systems rely on binary classification, deciding only if an object is present or not, but not providing information on the actual object location. To estimate the object's location, one can take a sliding window approach, but this strongly increases the computational cost because the classifier or similarity function has to be evaluated over a large set of candidate subwindows. In this paper, we propose a simple yet powerful branch and bound scheme that allows efficient maximization of a large class of quality functions over all possible subimages. It converges to a globally optimal solution typically in linear or even sublinear time, in contrast to the quadratic scaling of exhaustive or sliding window search. We show how our method is applicable to different object detection and image retrieval scenarios. The achieved speedup allows the use of classifiers for localization that formerly were considered too slow for this task, such as SVMs with a spatial pyramid kernel or nearest-neighbor classifiers based on the chi^2 distance. We demonstrate state-of-the-art localization performance of the resulting systems on the UIUC Cars data set, the PASCAL VOC 2006 data set, and in the PASCAL VOC 2007 competition.
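A minimal, generic sketch of such a branch-and-bound search (not the paper's exact implementation): candidate sets of rectangles are represented by interval bounds on (top, bottom, left, right), and `upper_bound` is an assumed, user-supplied function that over-estimates the best quality attainable by any rectangle in a set (for instance, derived from per-pixel classifier scores) and is exact on singleton sets.

```python
import heapq

def split(rect_set):
    """Split a candidate set along its largest coordinate interval."""
    i, (lo, hi) = max(enumerate(rect_set), key=lambda kv: kv[1][1] - kv[1][0])
    mid = (lo + hi) // 2
    left, right = list(rect_set), list(rect_set)
    left[i], right[i] = (lo, mid), (mid + 1, hi)
    return tuple(left), tuple(right)

def branch_and_bound_search(initial_set, upper_bound):
    """Return the globally best rectangle and its quality.

    initial_set example: ((0, H-1), (0, H-1), (0, W-1), (0, W-1))
    for intervals over (top, bottom, left, right).
    """
    heap = [(-upper_bound(initial_set), initial_set)]
    while heap:
        neg_bound, rect_set = heapq.heappop(heap)
        if all(lo == hi for lo, hi in rect_set):        # a single rectangle remains
            return [lo for lo, _ in rect_set], -neg_bound
        for child in split(rect_set):
            heapq.heappush(heap, (-upper_bound(child), child))
```

Because the most promising candidate set is always expanded first, the search typically touches only a small fraction of all subwindows while still returning the global maximizer, which is the behaviour the abstract describes.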

PDF Web DOI [BibTex]

Bayesian Quadratic Reinforcement Learning

Hennig, P., Stern, D., Graepel, T.

NIPS Workshop on Probabilistic Approaches for Robotics and Control, December 2009 (poster)

PDF Web [BibTex]

A computational model of human table tennis for robot application

Mülling, K., Peters, J.

In AMS 2009, pages: 57-64, (Editors: Dillmann, R. , J. Beyerer, C. Stiller, M. Zöllner, T. Gindele), Springer, Berlin, Germany, Autonome Mobile Systeme, December 2009 (inproceedings)

Abstract
Table tennis is a difficult motor skill which requires all basic components of a general motor skill learning system. In order to get a step closer to such a generic approach to the automatic acquisition and refinement of table tennis, we study table tennis from a human motor control point of view. We make use of the basic models of discrete human movement phases, virtual hitting points, and the operational timing hypothesis. Using these components, we create a computational model which is aimed at reproducing human-like behavior. We verify the functionality of this model in a physically realistic simulation of a Barrett WAM.

Web DOI [BibTex]

PAC-Bayesian Approach to Formulation of Clustering Objectives

Seldin, Y.

NIPS Workshop on "Clustering: Science or Art? Towards Principled Approaches", December 2009 (talk)

Abstract
Clustering is a widely used tool for exploratory data analysis. However, the theoretical understanding of clustering is very limited. We still do not have a well-founded answer to the seemingly simple question of "how many clusters are present in the data?", and furthermore a formal comparison of clusterings based on different optimization objectives is far beyond our abilities. The lack of good theoretical support gives rise to multiple heuristics that confuse the practitioners and stall development of the field. We suggest that the ill-posed nature of clustering problems is caused by the fact that clustering is often taken out of its subsequent application context. We argue that one does not cluster the data just for the sake of clustering it, but rather to facilitate the solution of some higher level task. By evaluation of the clustering's contribution to the solution of the higher level task it is possible to compare different clusterings, even those obtained by different optimization objectives. In the preceding work it was shown that such an approach can be applied to evaluation and design of co-clustering solutions. Here we suggest that this approach can be extended to other settings, where clustering is applied.

PDF Web Web [BibTex]

A second order sliding mode controller with polygonal constraints

Dinuzzo, F.

In pages: 6715-6719, IEEE, Piscataway, NJ, USA, 48th IEEE Conference on Decision and Control (CDC), December 2009 (inproceedings)

Abstract
We present a discontinuous controller that ensures uniform finite-time zero stabilization of the output for uncertain SISO systems of relative degree two, while keeping the auxiliary system state within a prescribed convex polygon. The proposed method extends the applicability of second-order sliding mode controllers to uncertain dynamical systems with constraints.

Web DOI [BibTex]

Generation of three-dimensional random rotations in fitting and matching problems

Habeck, M.

Computational Statistics, 24(4):719-731, December 2009 (article)

Abstract
An algorithm is developed to generate random rotations in three-dimensional space that follow a probability distribution arising in fitting and matching problems. The rotation matrices are orthogonally transformed into an optimal basis and then parameterized using Euler angles. The conditional distributions of the three Euler angles have a very simple form: the two azimuthal angles can be decoupled by sampling their sum and difference from a von Mises distribution; the cosine of the polar angle is exponentially distributed and thus straightforward to generate. Simulation results are shown and demonstrate the effectiveness of the method. The algorithm is compared to other methods for generating random rotations such as a random walk Metropolis scheme and a Gibbs sampling algorithm recently introduced by Green and Mardia. Finally, the algorithm is applied to a probabilistic version of the Procrustes problem of fitting two point sets, and used in the context of protein structure superposition.
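A hedged sketch of the sampling scheme as summarized above: the sum and difference of the two azimuthal angles are drawn from von Mises distributions, and the cosine of the polar angle from a density proportional to exp(kappa * cos(theta)) truncated to [-1, 1]. The concentration parameters below are placeholders (in a fitting problem they would come from the data and the optimal basis), and the z-y-z Euler convention is an assumption.

```python
import numpy as np

def sample_cos_polar(kappa, rng):
    """Inverse-CDF sample from p(x) proportional to exp(kappa * x) on [-1, 1], kappa > 0."""
    u = rng.uniform()
    return np.log(np.exp(-kappa) + u * (np.exp(kappa) - np.exp(-kappa))) / kappa

def rot_z(a):
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def sample_rotation(kappa_sum, kappa_diff, kappa_polar, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    s = rng.vonmises(0.0, kappa_sum)        # sum of the two azimuthal angles
    d = rng.vonmises(0.0, kappa_diff)       # difference of the two azimuthal angles
    phi1, phi2 = (s + d) / 2.0, (s - d) / 2.0
    theta = np.arccos(sample_cos_polar(kappa_polar, rng))
    rot_y = np.array([[np.cos(theta),  0.0, np.sin(theta)],
                      [0.0,            1.0, 0.0],
                      [-np.sin(theta), 0.0, np.cos(theta)]])
    return rot_z(phi1) @ rot_y @ rot_z(phi2)   # z-y-z Euler convention (assumed)
```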

PDF DOI [BibTex]

Semi-supervised Kernel Canonical Correlation Analysis of Human Functional Magnetic Resonance Imaging Data

Shelton, JA.

Women in Machine Learning Workshop (WiML), December 2009 (talk)

Abstract
Kernel Canonical Correlation Analysis (KCCA) is a general technique for subspace learning that incorporates principal components analysis (PCA) and Fisher linear discriminant analysis (LDA) as special cases. By finding directions that maximize correlation, KCCA learns representations tied more closely to the underlying process generating the data and can ignore high-variance noise directions. However, for data where acquisition in a given modality is expensive or otherwise limited, KCCA may suffer from small sample effects. We propose to use semi-supervised Laplacian regularization to utilize data that are present in only one modality. This manifold learning approach is able to find highly correlated directions that also lie along the data manifold, resulting in a more robust estimate of correlated subspaces. Data acquired with functional magnetic resonance imaging (fMRI) are naturally amenable to subspace techniques, as they are well aligned, and such data from the human brain are a particularly interesting candidate. In this study we implemented various supervised and semi-supervised versions of KCCA on human fMRI data, with regression to single and multivariate labels (corresponding to video content subjects viewed during the image acquisition). In each variate condition, Laplacian regularization improved performance whereas the semi-supervised variants of KCCA yielded the best performance. We additionally analyze the weights learned by the regression in order to infer brain regions that are important during different types of visual processing.

PDF Web [BibTex]


Adaptive Importance Sampling for Value Function Approximation in Off-policy Reinforcement Learning

Hachiya, H., Akiyama, T., Sugiyama, M., Peters, J.

Neural Networks, 22(10):1399-1410, December 2009 (article)

Abstract
Off-policy reinforcement learning is aimed at efficiently using data samples gathered from a policy that is different from the currently optimized policy. A common approach is to use importance sampling techniques for compensating for the bias of value function estimators caused by the difference between the data-sampling policy and the target policy. However, existing off-policy methods often do not take the variance of the value function estimators explicitly into account and therefore their performance tends to be unstable. To cope with this problem, we propose using an adaptive importance sampling technique which allows us to actively control the trade-off between bias and variance. We further provide a method for optimally determining the trade-off parameter based on a variant of cross-validation. We demonstrate the usefulness of the proposed approach through simulations.
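A hedged sketch of the flattening idea described above: per-episode importance ratios between a target policy `pi` and the behaviour policy `pi_b` (both assumed here to be callables returning action probabilities) are raised to an exponent `nu` in [0, 1] that trades bias (nu near 0) against variance (nu near 1). In the paper nu is chosen by a variant of cross-validation; here it is simply a parameter.

```python
import numpy as np

def flattened_is_estimate(episodes, pi, pi_b, nu, gamma=0.99):
    """episodes: list of [(state, action, reward), ...] trajectories."""
    returns, weights = [], []
    for episode in episodes:
        ratio = np.prod([pi(a, s) / pi_b(a, s) for s, a, _ in episode])
        ret = sum(gamma ** t * r for t, (_, _, r) in enumerate(episode))
        weights.append(ratio ** nu)          # nu = 0: plain average, nu = 1: full IS
        returns.append(ret)
    weights = np.asarray(weights)
    return float(np.dot(weights, returns) / weights.sum())
```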

PDF Web DOI [BibTex]

A PAC-Bayesian Approach to Formulation of Clustering Objectives

Seldin, Y., Tishby, N.

In Proceedings of the NIPS 2009 Workshop "Clustering: Science or Art? Towards Principled Approaches", pages: 1-4, NIPS Workshop "Clustering: Science or Art? Towards Principled Approaches", December 2009 (inproceedings)

Abstract
Clustering is a widely used tool for exploratory data analysis. However, the theoretical understanding of clustering is very limited. We still do not have a well-founded answer to the seemingly simple question of “how many clusters are present in the data?”, and furthermore a formal comparison of clusterings based on different optimization objectives is far beyond our abilities. The lack of good theoretical support gives rise to multiple heuristics that confuse the practitioners and stall development of the field. We suggest that the ill-posed nature of clustering problems is caused by the fact that clustering is often taken out of its subsequent application context. We argue that one does not cluster the data just for the sake of clustering it, but rather to facilitate the solution of some higher level task. By evaluation of the clustering’s contribution to the solution of the higher level task it is possible to compare different clusterings, even those obtained by different optimization objectives. In the preceding work it was shown that such an approach can be applied to evaluation and design of co-clustering solutions. Here we suggest that this approach can be extended to other settings, where clustering is applied.

PDF Web [BibTex]

Policy Transfer in Apprenticeship Learning

Boularias, A., Chaib-Draa, B.

NIPS Workshop on Transfer Learning for Structured Data (TLSD-09), December 2009 (poster)

PDF Web [BibTex]

Notes on Graph Cuts with Submodular Edge Weights

Jegelka, S., Bilmes, J.

In pages: 1-6, NIPS Workshop on Discrete Optimization in Machine Learning: Submodularity, Sparsity & Polyhedra (DISCML), December 2009 (inproceedings)

Abstract
Generalizing the cost in the standard min-cut problem to a submodular cost function immediately makes the problem harder. Not only do we prove NP-hardness even for nonnegative submodular costs, but we also show a lower bound of Ω(|V|^{1/3}) on the approximation factor for the (s, t)-cut version of the problem. On the positive side, we propose and compare three approximation algorithms with an overall approximation factor of O(min{|V|, √(|E| log |V|)}) that appear to do well in practice.

PDF Web [BibTex]

Guest editorial: special issue on structured prediction

Parker, C., Altun, Y., Tadepalli, P.

Machine Learning, 77(2-3):161-164, December 2009 (article)

PDF DOI [BibTex]

Structured prediction by joint kernel support estimation

Lampert, CH., Blaschko, MB.

Machine Learning, 77(2-3):249-269, December 2009 (article)

PDF DOI [BibTex]

Learning new basic Movements for Robotics

Kober, J., Peters, J.

In AMS 2009, pages: 105-112, (Editors: Dillmann, R. , J. Beyerer, C. Stiller, M. Zöllner, T. Gindele), Springer, Berlin, Germany, Autonome Mobile Systeme, December 2009 (inproceedings)

Abstract
Obtaining novel skills is one of the most important problems in robotics. Machine learning techniques may be a promising approach for automatic and autonomous acquisition of movement policies. However, this requires both an appropriate policy representation and suitable learning algorithms. Employing the most recent form of the dynamical systems motor primitives originally introduced by Ijspeert et al. [1], we show how both discrete and rhythmic tasks can be learned using a concerted approach of both imitation and reinforcement learning, and present our current best performing learning algorithms. Finally, we show that it is possible to include a start-up phase in rhythmic primitives. We apply our approach to two elementary movements, i.e., Ball-in-a-Cup and Ball-Paddling, which can be learned on a real Barrett WAM robot arm at a pace similar to human learning.
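For context, a minimal sketch of the discrete dynamical-systems motor primitives (in the Ijspeert et al. formulation) that the abstract builds on. The basis-function weights `w` would be obtained by imitation and then refined by reinforcement learning, which is exactly the part this sketch omits; all gains and step sizes are common default choices rather than the paper's settings, and `centers`, `widths`, `w` are NumPy arrays of equal length.

```python
import numpy as np

def dmp_rollout(y0, g, w, centers, widths, tau=1.0, dt=0.002,
                alpha_z=25.0, beta_z=6.25, alpha_x=8.0):
    """Integrate one discrete movement primitive with Euler steps."""
    y, z, x = float(y0), 0.0, 1.0
    trajectory = []
    while x > 1e-3:                                    # until the canonical system has decayed
        psi = np.exp(-widths * (x - centers) ** 2)     # Gaussian basis functions
        f = x * (g - y0) * np.dot(psi, w) / (psi.sum() + 1e-12)   # learned forcing term
        z += dt / tau * (alpha_z * (beta_z * (g - y) - z) + f)
        y += dt / tau * z
        x += dt / tau * (-alpha_x * x)
        trajectory.append(y)
    return np.asarray(trajectory)
```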

PDF Web DOI [BibTex]

From Motor Learning to Interaction Learning in Robots

Sigaud, O., Peters, J.

In Proceedings of 7ème Journées Nationales de la Recherche en Robotique, pages: 189-195, JNRR, November 2009 (inproceedings)

Abstract
The number of advanced robot systems has been increasing in recent years yielding a large variety of versatile designs with many degrees of freedom. These robots have the potential of being applicable in uncertain tasks outside well-structured industrial settings. However, the complexity of both systems and tasks is often beyond the reach of classical robot programming methods. As a result, a more autonomous solution for robot task acquisition is needed where robots adaptively adjust their behaviour to the encountered situations and required tasks. Learning approaches pose one of the most appealing ways to achieve this goal. However, while learning approaches are of high importance for robotics, we cannot simply use off-the-shelf methods from the machine learning community as these usually do not scale into the domains of robotics due to excessive computational cost as well as a lack of scalability. Instead, domain appropriate approaches are needed. We focus here on several core domains of robot learning. For accurate task execution, we need motor learning capabilities. For fast learning of the motor tasks, imitation learning offers the most promising approach. Self improvement requires reinforcement learning approaches that scale into the domain of complex robots. Finally, for efficient interaction of humans with robot systems, we will need a form of interaction learning. This contribution provides a general introduction to these issues and briefly presents the contributions of the related book chapters to the corresponding research topics.

PDF Web [BibTex]

Detection of objects in noisy images and site percolation on square lattices

Langovoy, M., Wittich, O.

(2009-035), EURANDOM, Technische Universiteit Eindhoven, November 2009 (techreport)

Abstract
We propose a novel probabilistic method for detection of objects in noisy images. The method uses results from percolation and random graph theories. We present an algorithm that detects objects of unknown shape in the presence of random noise. Our procedure substantially differs from wavelet-based algorithms. The algorithm has linear complexity and exponential accuracy and is appropriate for real-time systems. We prove results on the consistency and algorithmic complexity of our procedure.
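A much-simplified sketch of the detection idea: threshold the noisy image, view above-threshold pixels as occupied sites of the square lattice, and declare an object present wherever a connected cluster is larger than would be plausible under pure noise. The cluster-size threshold `min_cluster` below is a placeholder, whereas the paper derives the critical size from percolation theory.

```python
import numpy as np
from scipy import ndimage

def detect_objects(image, threshold, min_cluster=50):
    """Flag connected above-threshold clusters that are too large to be noise."""
    occupied = image > threshold
    labels, n = ndimage.label(occupied)                  # 4-connected site clusters
    sizes = ndimage.sum(occupied, labels, index=np.arange(1, n + 1))
    significant = np.flatnonzero(sizes >= min_cluster) + 1
    return np.isin(labels, significant)                  # boolean detection mask
```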

PDF [BibTex]

A note on ethical aspects of BCI

Haselager, P., Vlek, R., Hill, J., Nijboer, F.

Neural Networks, 22(9):1352-1357, November 2009 (article)

Abstract
This paper focuses on ethical aspects of BCI, as a research and a clinical tool, that are challenging for practitioners currently working in the field. Specifically, the difficulties involved in acquiring informed consent from locked-in patients are investigated, in combination with an analysis of the shared moral responsibility in BCI teams, and the complications encountered in establishing effective communication with media.

Web DOI [BibTex]

Model Learning with Local Gaussian Process Regression

Nguyen-Tuong, D., Seeger, M., Peters, J.

Advanced Robotics, 23(15):2015-2034, November 2009 (article)

Abstract
Precise models of robot inverse dynamics allow the design of significantly more accurate, energy-efficient and compliant robot control. However, in some cases the accuracy of rigid-body models does not suffice for sound control performance due to unmodeled nonlinearities arising from hydraulic cable dynamics, complex friction or actuator dynamics. In such cases, estimating the inverse dynamics model from measured data poses an interesting alternative. Nonparametric regression methods, such as Gaussian process regression (GPR) or locally weighted projection regression (LWPR), are not as restrictive as parametric models and, thus, offer a more flexible framework for approximating unknown nonlinearities. In this paper, we propose a local approximation to the standard GPR, called local GPR (LGP), for real-time online model learning by combining the strengths of both regression methods, i.e., the high accuracy of GPR and the fast speed of LWPR. The approach is shown to have competitive learning performance for high-dimensional data while being sufficiently fast for real-time learning. The effectiveness of LGP is exhibited by a comparison with the state-of-the-art regression techniques, such as GPR, LWPR and ν-support vector regression. The applicability of the proposed LGP method is demonstrated by real-time online learning of the inverse dynamics model for robot model-based control on a Barrett WAM robot arm.
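A hedged toy sketch of the local-GP idea summarized above: training points are assigned to local models by kernel proximity to model centers (a new center is opened when no model is close enough), and predictions combine the local GPs with proximity weights. The generation threshold and kernel width are placeholders, inputs are assumed to be 1-D NumPy arrays, and the real method updates its local models incrementally online rather than refitting as done here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

class LocalGP:
    """Toy local-GP regressor: one GP per region, regions opened by proximity."""

    def __init__(self, w_gen=0.5, length_scale=1.0):
        self.w_gen, self.ls = w_gen, length_scale
        self.centers, self.data = [], []            # one (X, y) pair of lists per model

    def _prox(self, x, c):
        return np.exp(-0.5 * np.sum((x - c) ** 2) / self.ls ** 2)

    def add(self, x, y):
        prox = [self._prox(x, c) for c in self.centers]
        if not prox or max(prox) < self.w_gen:      # no model close enough: open one
            self.centers.append(x)
            self.data.append(([x], [y]))
        else:
            i = int(np.argmax(prox))
            self.data[i][0].append(x)
            self.data[i][1].append(y)

    def predict(self, x):
        w, mu = [], []
        for c, (X, y) in zip(self.centers, self.data):
            gp = GaussianProcessRegressor(kernel=RBF(self.ls)).fit(X, y)
            w.append(self._prox(x, c))
            mu.append(gp.predict([x])[0])
        return float(np.dot(w, mu) / np.sum(w))     # proximity-weighted prediction
```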

PDF Web DOI [BibTex]


An Incremental GEM Framework for Multiframe Blind Deconvolution, Super-Resolution, and Saturation Correction

Harmeling, S., Sra, S., Hirsch, M., Schölkopf, B.

(187), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2009 (techreport)

Abstract
We develop an incremental generalized expectation maximization (GEM) framework to model the multiframe blind deconvolution problem. A simplistic version of this problem was recently studied by Harmeling et al. (2009). We solve a more realistic version of this problem which includes the following major features: (i) super-resolution ability despite noise and unknown blurring; (ii) saturation-correction, i.e., handling of overexposed pixels that can otherwise confound the image processing; and (iii) simultaneous handling of color channels. These features are seamlessly integrated into our incremental GEM framework to yield simple but efficient multiframe blind deconvolution algorithms. We present technical details concerning critical steps of our algorithms, especially to highlight how all operations can be written using matrix-vector multiplications. We apply our algorithm to real-world images from astronomy and super-resolution tasks. Our experimental results show that our methods yield improved resolution and deconvolution at the same time.

PDF [BibTex]

Efficient Filter Flow for Space-Variant Multiframe Blind Deconvolution

Hirsch, M., Sra, S., Schölkopf, B., Harmeling, S.

(188), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2009 (techreport)

Abstract
Motivated ultimately by the goal of facilitating space-variant blind deconvolution, we present a class of linear transformations that are expressive enough for space-variant filters but at the same time especially designed for efficient matrix-vector multiplications. Successful results on astronomical imaging through atmospheric turbulence and on noisy magnetic resonance images of constantly moving objects demonstrate the practical significance of our approach.

PDF [BibTex]

Methods for feature selection in a learning machine

Weston, J., Elisseeff, A., Schölkopf, B., Pérez-Cruz, F.

United States Patent, No 7624074, November 2009 (patent)

[BibTex]

Event-Related Potentials in Brain-Computer Interfacing

Hill, NJ.

Invited lecture on the bachelor & masters course "Introduction to Brain-Computer Interfacing", October 2009 (talk)

Abstract
An introduction to event-related potentials with specific reference to their use in brain-computer interfacing applications and research.

PDF [BibTex]

BCI2000 and Python

Hill, NJ.

Invited lecture at the 5th International BCI2000 Workshop, October 2009 (talk)

Abstract
A tutorial, with exercises, on how to integrate your own Python code with the BCI2000 software package.

PDF [BibTex]

Implementing a Signal Processing Filter in BCI2000 Using C++

Hill, NJ., Mellinger, J.

Invited lecture at the 5th International BCI2000 Workshop, October 2009 (talk)

Abstract
This tutorial shows how the functionality of the BCI2000 software package can be extended with one's own code, using BCI2000's C++ API.

PDF [BibTex]

Detecting Objects in Large Image Collections and Videos by Efficient Subimage Retrieval

Lampert, CH.

In ICCV 2009, pages: 987-994, IEEE Computer Society, Piscataway, NJ, USA, Twelfth IEEE International Conference on Computer Vision, October 2009 (inproceedings)

Abstract
We study the task of detecting the occurrence of objects in large image collections or in videos, a problem that combines aspects of content based image retrieval and object localization. While most previous approaches are either limited to special kinds of queries, or do not scale to large image sets, we propose a new method, efficient subimage retrieval (ESR), which is at the same time very flexible and very efficient. Relying on a two-layered branch-and-bound setup, ESR performs object-based image retrieval in sets of 100,000 or more images within seconds. An extensive evaluation on several datasets shows that ESR is not only very fast, but it also achieves detection accuracies that are on par with or superior to previously published methods for object-based image retrieval.

PDF Web DOI [BibTex]

Algebraic polynomials and moments of stochastic integrals

Langovoy, M.

(2009-031), EURANDOM, Technische Universiteit Eindhoven, October 2009 (techreport)

PDF [BibTex]

Toward a Theory of Consciousness

Tononi, G., Balduzzi, D.

In The Cognitive Neurosciences, pages: 1201-1220, (Editors: Gazzaniga, M.S.), MIT Press, Cambridge, MA, USA, October 2009 (inbook)

Web [BibTex]

Inferring textual entailment with a probabilistically sound calculus

Harmeling, S.

Natural Language Engineering, 15(4):459-477, October 2009 (article)

Abstract
We introduce a system for textual entailment that is based on a probabilistic model of entailment. The model is defined using a calculus of transformations on dependency trees, which is characterized by the fact that derivations in that calculus preserve the truth only with a certain probability. The calculus is successfully evaluated on the datasets of the PASCAL Challenge on Recognizing Textual Entailment.

PDF Web DOI [BibTex]

Modeling and Visualizing Uncertainty in Gene Expression Clusters using Dirichlet Process Mixtures

Rasmussen, CE., de la Cruz, BJ., Ghahramani, Z., Wild, DL.

IEEE/ACM Transactions on Computational Biology and Bioinformatics, 6(4):615-628, October 2009 (article)

Abstract
Although the use of clustering methods has rapidly become one of the standard computational approaches in the literature of microarray gene expression data, little attention has been paid to uncertainty in the results obtained. Dirichlet process mixture models provide a non-parametric Bayesian alternative to the bootstrap approach to modeling uncertainty in gene expression clustering. Most previously published applications of Bayesian model based clustering methods have been to short time series data. In this paper we present a case study of the application of non-parametric Bayesian clustering methods to the clustering of high-dimensional non-time series gene expression data using full Gaussian covariances. We use the probability that two genes belong to the same cluster in a Dirichlet process mixture model as a measure of the similarity of these gene expression profiles. Conversely, this probability can be used to define a dissimilarity measure, which, for the purposes of visualization, can be input to one of the standard linkage algorithms used for hierarchical clustering. Biologically plausible results are obtained from the Rosetta compendium of expression profiles which extend previously published cluster analyses of this data.
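A hedged sketch of the visualization step described above: given posterior samples of cluster assignments from the Dirichlet process mixture (the MCMC itself is not shown), the probability that two genes share a cluster is estimated by the fraction of samples in which they do, and one minus that probability is passed as a dissimilarity to a standard hierarchical linkage routine.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

def coclustering_dissimilarity(assignments):
    """assignments: (n_mcmc_samples, n_genes) array of integer cluster labels."""
    same = (assignments[:, :, None] == assignments[:, None, :]).mean(axis=0)
    return 1.0 - same                                 # estimated P(different cluster)

def dendrogram_input(assignments, method="average"):
    d = coclustering_dissimilarity(assignments)
    np.fill_diagonal(d, 0.0)
    return linkage(squareform(d, checks=False), method=method)
```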

PDF Web DOI [BibTex]

A new non-monotonic algorithm for PET image reconstruction

Sra, S., Kim, D., Dhillon, I., Schölkopf, B.

In IEEE - Nuclear Science Symposium Conference Record (NSS/MIC), 2009, pages: 2500-2502, (Editors: B Yu), IEEE, Piscataway, NJ, USA, IEEE Nuclear Science Symposium and Medical Imaging Conference, October 2009 (inproceedings)

Abstract
Maximizing some form of Poisson likelihood (either with or without penalization) is central to image reconstruction algorithms in emission tomography. In this paper we introduce NMML, a non-monotonic algorithm for maximum likelihood PET image reconstruction. NMML offers a simple and flexible procedure that also easily incorporates standard convex regularization for doing penalized likelihood estimation. A vast number of image reconstruction algorithms have been developed for PET, and new ones continue to be designed. Among these, methods based on the expectation maximization (EM) and ordered-subsets (OS) framework seem to have enjoyed the greatest popularity. Our method NMML differs fundamentally from methods based on EM: i) it does not depend on the concept of optimization transfer (or surrogate functions); and ii) it is a rapidly converging nonmonotonic descent procedure. The greatest strengths of NMML, however, are its simplicity, efficiency, and scalability, which make it especially attractive for tomographic reconstruction. We provide a theoretical analysis of NMML, and empirically observe it to outperform standard EM based methods, sometimes by orders of magnitude. NMML seamlessly allows integration of penalties (regularizers) in the likelihood. This ability can prove to be crucial, especially because with the rapidly rising importance of combined PET/MR scanners, one will want to include more “prior” knowledge into the reconstruction.

PDF DOI [BibTex]

Approximation Algorithms for Tensor Clustering

Jegelka, S., Sra, S., Banerjee, A.

In Algorithmic Learning Theory: 20th International Conference, pages: 368-383, (Editors: Gavalda, R. , G. Lugosi, T. Zeugmann, S. Zilles), Springer, Berlin, Germany, ALT, October 2009 (inproceedings)

Abstract
We present the first (to our knowledge) approximation algorithm for tensor clustering—a powerful generalization to basic 1D clustering. Tensors are increasingly common in modern applications dealing with complex heterogeneous data and clustering them is a fundamental tool for data analysis and pattern discovery. Akin to their 1D cousins, common tensor clustering formulations are NP-hard to optimize. But, unlike the 1D case no approximation algorithms seem to be known. We address this imbalance and build on recent co-clustering work to derive a tensor clustering algorithm with approximation guarantees, allowing metrics and divergences (e.g., Bregman) as objective functions. Therewith, we answer two open questions by Anagnostopoulos et al. (2008). Our analysis yields a constant approximation factor independent of data size; a worst-case example shows this factor to be tight for Euclidean co-clustering. However, empirically the approximation factor is observed to be conservative, so our method can also be used in practice.

PDF Web DOI [BibTex]

Active learning using mean shift optimization for robot grasping

Kroemer, O., Detry, R., Piater, J., Peters, J.

In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), pages: 2610-2615, IEEE Service Center, Piscataway, NJ, USA, 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2009 (inproceedings)

Abstract
When children learn to grasp a new object, they often know several possible grasping points from observing a parent's demonstration and subsequently learn better grasps by trial and error. From a machine learning point of view, this process is an active learning approach. In this paper, we present a new robot learning framework for reproducing this ability in robot grasping. For doing so, we chose a straightforward approach: first, the robot observes a few good grasps by demonstration and learns a value function for these grasps using Gaussian process regression. Subsequently, it chooses grasps which are optimal with respect to this value function using a mean-shift optimization approach, and tries them out on the real system. Upon every completed trial, the value function is updated, and in the following trials it is more likely to choose even better grasping points. This method exhibits fast learning due to the data-efficiency of the Gaussian process regression framework and the fact that the mean-shift method provides maxima of this cost function. Experiments were repeatedly carried out successfully on a real robot system. After less than sixty trials, our system has adapted its grasping policy to consistently exhibit successful grasps.
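A heavily simplified sketch of the selection step described above. The paper fits a Gaussian-process value function to demonstrated and tried grasps and finds its maxima by mean-shift; here a reward-weighted Gaussian kernel density stands in for that value function (valid for non-negative rewards), and a mean-shift-style fixed-point update refines a candidate grasp. The bandwidth and iteration count are placeholders.

```python
import numpy as np

def mean_shift_grasp(x0, grasps, rewards, bandwidth=0.1, iters=50):
    """grasps: (n, d) tried grasp parameters; rewards: (n,) outcomes in [0, 1]."""
    x = np.asarray(x0, dtype=float)
    grasps = np.asarray(grasps, dtype=float)
    rewards = np.asarray(rewards, dtype=float)
    for _ in range(iters):
        k = np.exp(-0.5 * np.sum((grasps - x) ** 2, axis=1) / bandwidth ** 2)
        w = rewards * k                        # reward-weighted Gaussian kernel
        if w.sum() < 1e-12:                    # candidate too far from all data
            break
        x = (w[:, None] * grasps).sum(axis=0) / w.sum()
    return x                                   # refined grasp candidate
```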

PDF Web DOI [BibTex]

Sparse online model learning for robot control with support vector regression

Nguyen-Tuong, D., Schölkopf, B., Peters, J.

In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), pages: 3121-3126, IEEE Service Center, Piscataway, NJ, USA, 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2009 (inproceedings)

Abstract
The increasing complexity of modern robots makes it prohibitively hard to accurately model such systems as required by many applications. In such cases, machine learning methods offer a promising alternative for approximating such models using measured data. To date, high computational demands have largely restricted machine learning techniques to mostly offline applications. However, making robots adaptive to changes in the dynamics and able to cope with unexplored areas of the state space requires online learning. In this paper, we propose an approximation of the support vector regression (SVR) by sparsification based on the linear independence of the training data. As a result, we obtain a method which is applicable in real-time online learning. It exhibits competitive learning accuracy when compared with standard regression techniques, such as nu-SVR, Gaussian process regression (GPR) and locally weighted projection regression (LWPR).
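A hedged sketch of the sparsification-by-linear-independence idea referred to above, illustrated with an approximate-linear-dependence style test: a new input is kept as a basis point only if its kernel feature vector cannot be well approximated by the current dictionary. The threshold `eta` and the RBF bandwidth are placeholders, and this shows the dictionary test only, not the full online SVR update.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    return np.exp(-gamma * np.sum((np.asarray(a) - np.asarray(b)) ** 2))

def independence_residual(x, dictionary, gamma=1.0):
    """Squared RKHS distance of phi(x) to the span of the dictionary features."""
    if not dictionary:
        return np.inf
    K = np.array([[rbf(xi, xj, gamma) for xj in dictionary] for xi in dictionary])
    k = np.array([rbf(xi, x, gamma) for xi in dictionary])
    a = np.linalg.solve(K + 1e-8 * np.eye(len(dictionary)), k)
    return float(rbf(x, x, gamma) - k @ a)

def maybe_add(x, dictionary, eta=0.1, gamma=1.0):
    """Keep x as a new basis point only if it is sufficiently independent."""
    if independence_residual(x, dictionary, gamma) > eta:
        dictionary.append(x)
    return dictionary
```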

Web DOI [BibTex]

Clinical PET/MRI-System and Its Applications with MRI Based Attenuation Correction

Kolb, A., Hofmann, M., Sossi, V., Wehrl, H., Sauter, A., Schmid, A., Schlemmer, H., Claussen, C., Pichler, B.

IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC 2009), 2009, pages: 1, October 2009 (poster)

Abstract
Clinical PET/MRI is an emerging hybrid imaging modality. In addition to providing a unique possibility for multifunctional imaging with temporally and spatially matched data, it provides anatomical information that can be used for attenuation correction with no radiation exposure to the subjects. An advantage of combined over sequential PET and MR imaging is the reduction of total scan time. Here we present our initial experience with a hybrid brain PET/MRI system. Due to the terms of the ethical approval, patient scans could only be performed after a diagnostic PET/CT. We estimate that in approximately 50% of the cases PET/MRI was of superior diagnostic value compared to PET/CT and was able to provide additional information, such as DTI, spectroscopy and time-of-flight (TOF) angiography. Here we present three patient cases: in oncology, a retropharyngeal carcinoma; in neurooncology, a relapsing meningioma; and in neurology, a pharyngeal carcinoma in addition to an infarction of the right hemisphere. For quantitative PET imaging, attenuation correction is obligatory. In the current PET/MRI setup, we used our MRI-based atlas method to calculate the mu-map for attenuation correction. MR-based attenuation correction accuracy was quantitatively compared to CT-based PET attenuation correction. Extensive studies to assess potential mutual interferences between PET and MR imaging modalities as well as NEMA measurements have been performed. The first patient studies as well as the phantom tests clearly demonstrated the overall good imaging performance of this first human PET/MRI system. Ongoing work concentrates on advanced normalization and reconstruction methods incorporating count-rate-based algorithms.

Web [BibTex]
