2011


Greedy Learning of Binary Latent Trees

Harmeling, S., Williams, C.

IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(6):1087-1097, June 2011 (article)

Abstract
Inferring latent structures from observations helps to model and possibly also understand underlying data-generating processes. A rich class of latent structures are hierarchical latent class (HLC) models. Zhang (2004) proposed a search algorithm for learning such models that can find good solutions but is often computationally expensive. As an alternative we investigate two greedy procedures: the BIN-G algorithm determines both the structure of the tree and the cardinality of the latent variables in a bottom-up fashion. The BIN-A algorithm first determines the tree structure using agglomerative hierarchical clustering, and then determines the cardinality of the latent variables as for BIN-G. We show that even when restricting ourselves to binary trees we obtain HLC models of comparable quality to Zhang’s solutions, while being faster to compute. This claim is validated by a comprehensive comparison on several datasets. Furthermore, we demonstrate that our methods are able to estimate interpretable latent structures on real-world data with a large number of variables. By applying our method to a restricted version of the 20 newsgroups data these models turn out to be related to topic models, and on data from the PASCAL Visual Object Classes (VOC) 2007 challenge we show how such tree-structured models help us understand how objects co-occur in images.
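
A minimal sketch of the BIN-A first stage under stated assumptions: build a binary tree over the observed variables by agglomerative clustering on an empirical mutual-information similarity. The estimator, linkage choice, and all names are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

def pairwise_mutual_information(X):
    """Empirical mutual information between all pairs of binary columns of X."""
    n, d = X.shape
    mi = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            joint = np.histogram2d(X[:, i], X[:, j], bins=2)[0] / n
            outer = joint.sum(axis=1, keepdims=True) @ joint.sum(axis=0, keepdims=True)
            mask = joint > 0
            mi[i, j] = mi[j, i] = np.sum(joint[mask] * np.log(joint[mask] / outer[mask]))
    return mi

X = (np.random.rand(500, 6) > 0.5).astype(float)  # toy binary observations
mi = pairwise_mutual_information(X)
dist = mi.max() - mi                              # high MI -> small distance
np.fill_diagonal(dist, 0.0)
tree = linkage(squareform(dist, checks=False), method="average")  # binary tree over variables
```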

PDF Web DOI [BibTex]

Learning Dynamic Tactile Sensing with Robust Vision-based Training

Kroemer, O., Lampert, C., Peters, J.

IEEE Transactions on Robotics, 27(3):545-557, June 2011 (article)

Abstract
Dynamic tactile sensing is a fundamental ability to recognize materials and objects. However, while humans are born with partially developed dynamic tactile sensing and quickly master this skill, today's robots remain in their infancy. The development of such a sense requires not only better sensors but the right algorithms to deal with these sensors' data as well. For example, when classifying a material based on touch, the data are noisy, high-dimensional, and contain irrelevant signals as well as essential ones. Few classification methods from machine learning can deal with such problems. In this paper, we propose an efficient approach to infer suitable lower dimensional representations of the tactile data. In order to classify materials based on only the sense of touch, these representations are autonomously discovered using visual information of the surfaces during training. However, accurately pairing vision and tactile samples in real-robot applications is a difficult problem. The proposed approach, therefore, works with weak pairings between the modalities. Experiments show that the resulting approach is very robust and yields significantly higher classification performance based on only dynamic tactile sensing.

Web DOI [BibTex]

Algebraic polynomials and moments of stochastic integrals

Langovoy, M.

Statistics & Probability Letters, 81(6):627-631, June 2011 (article)

Abstract
We propose an algebraic method for proving estimates on moments of stochastic integrals. The method uses qualitative properties of roots of algebraic polynomials from certain general classes. As an application, we give a new proof of a variation of the Burkholder–Davis–Gundy inequality for the case of stochastic integrals with respect to real locally square integrable martingales. Further possible applications and extensions of the method are outlined.
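
For reference, the classical Burkholder–Davis–Gundy inequality that the paper revisits, stated here for a continuous local martingale M (the paper itself treats a variation for real locally square integrable martingales):

```latex
% For every p > 0 there exist constants c_p, C_p > 0 such that
\[
  c_p\, \mathbb{E}\!\left[\langle M\rangle_T^{p/2}\right]
  \;\le\; \mathbb{E}\!\left[\sup_{t\le T}|M_t|^{p}\right]
  \;\le\; C_p\, \mathbb{E}\!\left[\langle M\rangle_T^{p/2}\right].
\]
```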

PDF DOI [BibTex]

Inference for psychometric functions in the presence of nonstationary behavior

Fründ, I., Haenel, N., Wichmann, F.

Journal of Vision, 11(6):1-19, May 2011 (article)

Abstract
Measuring sensitivity is at the heart of psychophysics. Often, sensitivity is derived from estimates of the psychometric function. This function relates response probability to stimulus intensity. In estimating these response probabilities, most studies assume stationary observers: Responses are expected to be dependent only on the intensity of a presented stimulus and not on other factors such as stimulus sequence, duration of the experiment, or the responses on previous trials. Unfortunately, a number of factors such as learning, fatigue, or fluctuations in attention and motivation will typically result in violations of this assumption. The severity of these violations is yet unknown. We use Monte Carlo simulations to show that violations of these assumptions can result in underestimation of confidence intervals for parameters of the psychometric function. Even worse, collecting more trials does not eliminate this misestimation of confidence intervals. We present a simple adjustment of the confidence intervals that corrects for the underestimation almost independently of the number of trials and the particular type of violation.
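
The baseline procedure the paper scrutinizes can be illustrated with a parametric bootstrap for psychometric-function confidence intervals. The logistic form, 2AFC guessing rate, and all numbers below are simplified assumptions; the paper's point is that such intervals come out too narrow for nonstationary observers.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def nll(theta, x, k, n):
    """Negative log-likelihood of a logistic psychometric function (2AFC)."""
    m, w = theta
    p = np.clip(0.5 + 0.5 * expit((x - m) / w), 1e-6, 1 - 1e-6)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # stimulus intensities
n = np.full(5, 50)                        # trials per level (stationarity assumed)
k = np.array([27, 31, 38, 46, 50])        # correct responses
fit = minimize(nll, x0=[2.0, 1.0], args=(x, k, n), method="Nelder-Mead")

boot = []
for _ in range(999):                      # parametric bootstrap from the fit
    m, w = fit.x
    k_sim = np.random.binomial(n, 0.5 + 0.5 * expit((x - m) / w))
    boot.append(minimize(nll, x0=fit.x, args=(x, k_sim, n), method="Nelder-Mead").x)
ci = np.percentile(np.array(boot), [2.5, 97.5], axis=0)  # too narrow under nonstationarity
```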

Web DOI [BibTex]

Transition from the locked in to the completely locked-in state: A physiological analysis

Ramos Murguialday, A., Hill, J., Bensch, M., Martens, S., Halder, S., Nijboer, F., Schölkopf, B., Birbaumer, N., Gharabaghi, A.

Clinical Neurophysiology, 122(5):925-933, May 2011 (article)

Abstract
Objective: To clarify the physiological and behavioral boundaries between the locked-in state (LIS) and the completely locked-in state (CLIS) (no voluntary eye movements, no communication possible) through electrophysiological data, and to secure brain–computer interface (BCI) communication. Methods: Electromyography from facial muscles, the external anal sphincter (EAS), electrooculography, and electrocorticographic data during different psychophysiological tests were acquired to define electrophysiological differences in an amyotrophic lateral sclerosis (ALS) patient with an intracranially implanted grid of 112 electrodes over nine months while the patient passed from the LIS to the CLIS. Results: At the very end of the LIS there was no remaining facial muscle or external anal sphincter activity, but eye control persisted; eye movements were slow and lasted for short periods only. During the CLIS, event-related brain potentials (ERPs) to passive limb movements and auditory stimuli were recorded, whereas vibrotactile stimulation of different body parts produced no ERP response. Conclusions: The results presented contradict the commonly accepted assumption that the EAS is the last remaining muscle under voluntary control and demonstrate complete loss of eye movements in CLIS; the eye muscles were shown to be the last muscle group under voluntary control. The findings suggest ALS is a multisystem disorder, even affecting afferent sensory pathways. Significance: Auditory and proprioceptive brain–computer interface (BCI) systems are the only remaining communication channels in CLIS.

PDF DOI [BibTex]

Incremental online sparsification for model learning in real-time robot control

Nguyen-Tuong, D., Peters, J.

Neurocomputing, 74(11):1859-1867, May 2011 (article)

Abstract
For many applications such as compliant, accurate robot tracking control, dynamics models learned from data can help to achieve both compliant control performance as well as high tracking quality. Online learning of these dynamics models allows the robot controller to adapt itself to changes in the dynamics (e.g., due to time-variant nonlinearities or unforeseen loads). However, online learning in real-time applications, as required in control, cannot be realized by straightforward usage of off-the-shelf machine learning methods such as Gaussian process regression or support vector regression. In this paper, we propose a framework for online, incremental sparsification with a fixed budget designed for fast real-time model learning. The proposed approach employs a sparsification method based on an independence measure. In combination with an incremental learning approach such as incremental Gaussian process regression, we obtain a model approximation method which is applicable in real-time online learning. It exhibits competitive learning accuracy when compared with standard regression techniques. Implementation on a real Barrett WAM robot demonstrates the applicability of the approach in real-time online model learning for real world systems.
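
In the spirit of the sparsification step, here is a sketch of a kernel linear-independence test for deciding whether a new point enters a fixed-budget dictionary. The RBF kernel, threshold, and the non-incremental solve are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def is_independent(dictionary, x, threshold=1e-2, gamma=1.0):
    """True if phi(x) is poorly approximated by the span of the dictionary."""
    if not dictionary:
        return True
    K = np.array([[rbf(a, b, gamma) for b in dictionary] for a in dictionary])
    k = np.array([rbf(a, x, gamma) for a in dictionary])
    alpha = np.linalg.solve(K + 1e-8 * np.eye(len(K)), k)   # projection coefficients
    return rbf(x, x, gamma) - k @ alpha > threshold         # residual of the projection

dictionary = []
for x in np.random.randn(200, 3):        # streaming samples
    if is_independent(dictionary, x):    # a real-time version would update a
        dictionary.append(x)             # Cholesky factor instead of re-solving
```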

PDF DOI [BibTex]


Causal Influence of Gamma Oscillations on the Sensorimotor Rhythm

Grosse-Wentrup, M., Schölkopf, B., Hill, J.

NeuroImage, 56(2):837-842, May 2011 (article)

Abstract
Gamma oscillations of the electromagnetic field of the brain are known to be involved in a variety of cognitive processes, and are believed to be fundamental for information processing within the brain. While gamma oscillations have been shown to be correlated with brain rhythms at different frequencies, to date no empirical evidence has been presented that supports a causal influence of gamma oscillations on other brain rhythms. In this work, we study the relation of gamma oscillations and the sensorimotor rhythm (SMR) in healthy human subjects using electroencephalography. We first demonstrate that modulation of the SMR, induced by motor imagery of either the left or right hand, is positively correlated with the power of frontal and occipital gamma oscillations, and negatively correlated with the power of centro-parietal gamma oscillations. We then demonstrate that the most simple causal structure, capable of explaining the observed correlation of gamma oscillations and the SMR, entails a causal influence of gamma oscillations on the SMR. This finding supports the fundamental role attributed to gamma oscillations for information processing within the brain, and is of particular importance for brain–computer interfaces (BCIs). As modulation of the SMR is typically used in BCIs to infer a subject's intention, our findings entail that gamma oscillations have a causal influence on a subject's capability to utilize a BCI for means of communication.

PDF DOI [BibTex]


The effect of patient positioning aids on PET quantification in PET/MR imaging

Mantlik, F., Hofmann, M., Werner, M., Sauter, A., Kupferschläger, J., Schölkopf, B., Pichler, B., Beyer, T.

European Journal of Nuclear Medicine and Molecular Imaging, 38(5):920-929, May 2011 (article)

Abstract
Objectives: Clinical PET/MR requires the use of patient positioning aids to immobilize and support patients for the duration of the combined examination. Ancillary immobilization devices contribute to overall attenuation of the PET signal, but are not detected with conventional MR sequences and, hence, are ignored in standard MR-based attenuation correction (MR-AC). We report on the quantitative effect of not accounting for the attenuation of patient positioning aids in combined PET/MR imaging. Methods: We used phantom and patient data acquired with positioning aids on a PET/CT scanner (Biograph 16, HI-REZ) to mimic PET/MR imaging conditions. Reference CT-based attenuation maps were generated from measured (original) CT transmission images (origCT-AC). We also created MR-like attenuation maps by following the same conversion procedure of the attenuation values except for the prior delineation and subtraction of the positioning aids from the CT images (modCT-AC). First, a uniform 68Ge cylinder was positioned centrally in the PET/CT scanner and fixed with a vacuum mattress (10 cm thick) and, in a repeat examination, with MR positioning foam pads. Second, 16 patient datasets were selected for subsequent processing. All patients were regionally immobilized with positioning aids: a vacuum mattress for head/neck imaging (nine patients) and a foam mattress for imaging of the lower extremities (seven patients). PET images were reconstructed following CT-based attenuation and scatter correction using the original and modified (MR-like) CT images: PETorigCT-AC and PETmodCT-AC, respectively. PET images following origCT-AC and modCT-AC were compared visually and in terms of mean differences of voxels with a standardized uptake value of at least 1.0. In addition, we report maximum activity concentration in lesions for selected patients. Results: In the phantom study employing the vacuum mattress the average voxel activity in PETmodCT-AC was underestimated by 6.4% compared to PETorigCT-AC, with 3.4% of the PET voxels being underestimated by 10% or more. When the MR foam pads were not accounted for during AC, PETmodCT-AC was underestimated by 1.1% on average, with none of the PET voxels being underestimated by 10% or more. Evaluation of the head/neck patient data showed a decrease of 8.4% ([68Ga]DOTATOC) and 7.4% ([18F]FDG) when patient positioning aids were not accounted for during AC, while the corresponding decrease was insignificant for the lower extremities. Conclusion: Depending on the size and density of the positioning aids used, a regionally variable underestimation of PET activity following AC is observed when positioning aids are not accounted for. This underestimation may become relevant in combined PET/MR imaging of patients with neuropsychiatric indications, but appears to be of no clinical relevance in imaging the extremities.

PDF DOI [BibTex]

Improving quantification of functional networks with EEG inverse problem: Evidence from a decoding point of view

Besserve, M., Martinerie, J., Garnero, L.

NeuroImage, 55(4):1536-1547, April 2011 (article)

Abstract
Decoding experimental conditions from single-trial electroencephalographic (EEG) signals is becoming a major challenge for the study of brain function and for real-time applications such as brain–computer interfaces. EEG source reconstruction offers principled ways to estimate cortical activities from EEG signals, but to what extent it can enhance informative brain signals in single trials has not been addressed in a general setting. We tested this using the minimum norm estimate (MNE) solution to estimate spectral power and coherence features at the cortical level. With a fast implementation, we computed a support vector machine (SVM) classifier output from these quantities in real time, without priors on the relevant functional networks. We applied this approach to single-trial decoding of ongoing mental imagery tasks using EEG data recorded in 5 subjects. Our results show that reconstructing the underlying cortical network dynamics significantly outperforms a usual electrode-level approach in terms of information transfer and also reduces redundancy between coherence and power features, supporting a decrease of volume conduction effects. Additionally, the classifier coefficients reflect the most informative features of network activity, showing an important contribution of localized motor and sensory brain areas, and of coherence between areas up to 6 cm apart. This study provides a computationally efficient and interpretable strategy to extract information from functional networks at the cortical level in single trials. Moreover, it sets a general framework to evaluate the performance of EEG source reconstruction methods by their decoding abilities.

Web DOI [BibTex]


Using brain–computer interfaces to induce neural plasticity and restore function

Grosse-Wentrup, M., Mattia, D., Oweiss, K.

Journal of Neural Engineering, 8(2):1-5, April 2011 (article)

Abstract
Analyzing neural signals and providing feedback in real-time is one of the core characteristics of a brain-computer interface (BCI). As this feature may be employed to induce neural plasticity, utilizing BCI-technology for therapeutic purposes is increasingly gaining popularity in the BCI-community. In this review, we discuss the state-of-the-art of research on this topic, address the principles of and challenges in inducing neural plasticity by means of a BCI, and delineate the problems of study design and outcome evaluation arising in this context. The review concludes with a list of open questions and recommendations for future research in this field.

PDF DOI [BibTex]

EPIBLASTER: fast exhaustive two-locus epistasis detection strategy using graphical processing units

Kam-Thong, T., Czamara, D., Tsuda, K., Borgwardt, K., Lewis, C., Erhardt-Lehmann, A., Hemmer, B., Rieckmann, P., Daake, M., Weber, F., Wolf, C., Ziegler, A., Pütz, B., Holsboer, F., Schölkopf, B., Müller-Myhsok, B.

European Journal of Human Genetics, 19(4):465-471, April 2011 (article)

Abstract
Detection of epistatic interaction between loci has been postulated to provide a more in-depth understanding of the complex biological and biochemical pathways underlying human diseases. Studying the interaction between two loci is the natural progression following traditional and well-established single locus analysis. However, the added costs and time duration required for the computation involved have thus far deterred researchers from pursuing a genome-wide analysis of epistasis. In this paper, we propose a method allowing such analysis to be conducted very rapidly. The method, dubbed EPIBLASTER, is applicable to case–control studies and consists of a two-step process in which the difference in Pearson’s correlation coefficients is computed between controls and cases across all possible SNP pairs as an indication of significant interaction warranting further analysis. For the subset of interactions deemed potentially significant, a second-stage analysis is performed using the likelihood ratio test from the logistic regression to obtain the P-value for the estimated coefficients of the individual effects and the interaction term. The algorithm is implemented using the parallel computational capability of commercially available graphical processing units to greatly reduce the computation time involved. In the current setup and example data sets (211 cases, 222 controls, 299,468 SNPs; and 601 cases, 825 controls, 291,095 SNPs), this coefficient evaluation stage can be completed in roughly 1 day. Our method allows for exhaustive and rapid detection of significant SNP pair interactions without imposing significant marginal effects of the single loci involved in the pair.
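
The first stage reduces to one correlation matrix per group and a difference, which is what makes it GPU-friendly. A NumPy sketch with toy sizes (the paper runs this stage on graphics hardware; all names below are illustrative):

```python
import numpy as np

def corr_matrix(G):
    """Pearson correlation between all SNP columns of a genotype matrix."""
    Z = (G - G.mean(axis=0)) / G.std(axis=0)
    return (Z.T @ Z) / len(G)

cases = np.random.randint(0, 3, size=(601, 1000)).astype(float)
controls = np.random.randint(0, 3, size=(825, 1000)).astype(float)
delta = corr_matrix(cases) - corr_matrix(controls)   # screening statistic per SNP pair
i, j = np.unravel_index(np.argmax(np.abs(np.triu(delta, k=1))), delta.shape)
# SNP pairs with large |delta| proceed to the second-stage logistic-regression LRT
```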

PDF PDF DOI [BibTex]

Model learning for robot control: a survey

Nguyen-Tuong, D., Peters, J.

Cognitive Processing, 12(4):319-340, April 2011 (article)

Abstract
Models are among the most essential tools in robotics, such as kinematics and dynamics models of the robot’s own body and controllable external objects. It is widely believed that intelligent mammals also rely on internal models in order to generate their actions. However, while classical robotics relies on manually generated models that are based on human insights into physics, future autonomous, cognitive robots need to be able to automatically generate models that are based on information which is extracted from the data streams accessible to the robot. In this paper, we survey the progress in model learning with a strong focus on robot control on a kinematic as well as dynamical level. Here, a model describes essential information about the behavior of the environment and the influence of an agent on this environment. In the context of model-based learning control, we view the model from three different perspectives. First, we study the different possible model learning architectures for robotics. Second, we discuss what kind of problems these architectures and the domain of robotics imply for the applicable learning methods. From this discussion, we deduce future directions for real-time learning algorithms. Third, we show where these scenarios have been used successfully in several case studies.

PDF PDF DOI [BibTex]

Critical issues in state-of-the-art brain–computer interface signal processing

Krusienski, D., Grosse-Wentrup, M., Galan, F., Coyle, D., Miller, K., Forney, E., Anderson, C.

Journal of Neural Engineering, 8(2):1-8, April 2011 (article)

Abstract
This paper reviews several critical issues facing signal processing for brain–computer interfaces (BCIs) and suggests several recent approaches that should be further examined. The topics were selected based on discussions held during the 4th International BCI Meeting at a workshop organized to review and evaluate the current state of, and issues relevant to, feature extraction and translation of field potentials for BCIs. The topics presented in this paper include the relationship between electroencephalography and electrocorticography, novel features for performance prediction, time-embedded signal representations, phase information, signal non-stationarity, and unsupervised adaptation.

PDF DOI [BibTex]

A Blind Deconvolution Approach for Improving the Resolution of Cryo-EM Density Maps

Hirsch, M., Schölkopf, B., Habeck, M.

Journal of Computational Biology, 18(3):335-346, March 2011 (article)

Abstract
Cryo-electron microscopy (cryo-EM) plays an increasingly prominent role in structure elucidation of macromolecular assemblies. Advances in experimental instrumentation and computational power have spawned numerous cryo-EM studies of large biomolecular complexes resulting in the reconstruction of three-dimensional density maps at intermediate and low resolution. In this resolution range, identification and interpretation of structural elements and modeling of biomolecular structure with atomic detail becomes problematic. In this article, we present a novel algorithm that enhances the resolution of intermediate- and low-resolution density maps. Our underlying assumption is to model the low-resolution density map as a blurred and possibly noise-corrupted version of an unknown high-resolution map that we seek to recover by deconvolution. By exploiting the nonnegativity of both the high-resolution map and blur kernel, we derive multiplicative updates reminiscent of those used in nonnegative matrix factorization. Our framework allows for easy incorporation of additional prior knowledge such as smoothness and sparseness, on both the sharpened density map and the blur kernel. A probabilistic formulation enables us to derive updates for the hyperparameters; therefore, our approach has no parameter that needs adjustment. We apply the algorithm to simulated three-dimensional electron microscopic data. We show that our method provides better resolved density maps when compared with B-factor sharpening, especially in the presence of noise. Moreover, our method can use additional information provided by homologous structures, which helps to improve the resolution even further.
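
A heavily simplified 1-D sketch of the multiplicative-update idea (a Richardson-Lucy-style alternation with no priors or hyperparameter updates, odd kernel size assumed), to show how nonnegativity of both the signal and the blur kernel is preserved:

```python
import numpy as np
from scipy.signal import fftconvolve

def blind_deconv(y, kernel_size=11, n_iter=100):
    """Alternate multiplicative updates of a nonnegative signal x and kernel k."""
    x = np.full_like(y, y.mean())
    k = np.ones(kernel_size) / kernel_size
    half = kernel_size // 2
    for _ in range(n_iter):
        r = y / (fftconvolve(x, k, mode="same") + 1e-12)
        x = x * fftconvolve(r, k[::-1], mode="same")     # x stays nonnegative
        r = y / (fftconvolve(x, k, mode="same") + 1e-12)
        c = np.correlate(r, x, mode="full")              # zero lag at index len(y)-1
        k = k * c[len(y) - 1 - half : len(y) + half] / (x.sum() + 1e-12)
        k = k / k.sum()                                  # keep the kernel normalized
    return x, k

truth = np.zeros(256); truth[[50, 120, 200]] = [1.0, 0.6, 0.8]
blur = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2); blur /= blur.sum()
x_hat, k_hat = blind_deconv(fftconvolve(truth, blur, mode="same") + 1e-3)
```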

Web DOI [BibTex]

Dynamics of excitable neural networks with heterogeneous connectivity

Chavez, M., Besserve, M., Le Van Quyen, M.

Progress in Biophysics and Molecular Biology, 105(1-2):29-33, March 2011 (article)

Abstract
A central issue of neuroscience is to understand how neural units integrate internal and external signals to create coherent states. Recently, it has been shown that the sensitivity and dynamic range of neural assemblies are optimal at a critical coupling among their elements. Complex architectures of connections seem to play a constructive role in the reliable coordination of neural units. Here we show that the synchronizability and sensitivity of excitable neural networks can be tuned by diversity in the connection strengths. We illustrate our findings for weighted networks with regular, random, and complex topologies. Additional comparisons of real brain networks support previous studies suggesting that heterogeneity in the connectivity may play a constructive role in information processing. These findings provide insights into the relationship between structure and function of neural circuits.

PDF DOI [BibTex]

Combining computational modeling with sparse and low-resolution data

Habeck, M., Nilges, M.

Journal of Structural Biology, 173(3):419, March 2011 (article)

Abstract
Structural biology is moving into a new era by shifting its focus from static structures of single proteins and protein domains to large and often fragile multi-component complexes. Over the past decade, structural genomics initiatives aimed to fill the voids in fold space and to provide a census of all protein structures. Completion of such an atlas of protein structures is still ongoing, but not sufficient for a mechanistic understanding of how living cells function. One of the great challenges is to bridge the gap between atomic resolution detail and the more fuzzy description of the molecular complexes that govern cellular processes or host–pathogen interactions. We want to move from cartoon-like representations of multi-component complexes to atomic resolution structures. To characterize the structures of the increasingly large and often flexible complexes, high resolution structure determination (as was possible for example for the ribosome) will likely stay the exception. Rather, data from many different methods providing information on the shape (X-ray crystallography, electron microscopy, SAXS, AFM, etc.) or on contacts between components (mass spectrometry, co-purification, or spectroscopic methods) need to be integrated with prior structural knowledge to build a consistent model of the complex. A particular difficulty is that the ratio between the number of conformational degrees of freedom and the number of measurements becomes unfavorable as we work with large complexes: data become increasingly sparse. Structural characterization of large molecular assemblies often involves a loss in resolution as well as in number and quality of data. We are good at solving structures of single proteins, but classical high-resolution structure determination by X-ray crystallography and NMR spectroscopy is often facing its limits as we move to higher molecular mass and increased flexibility. Therefore, structural studies on large complexes rely on new experimental techniques that complement the classical high resolution methods. But also computational approaches are becoming more important when it comes to integrating and analyzing structural information of often heterogeneous nature. Cryoelectron microscopy may serve as an example of how experimental methods can benefit from computation. Low-resolution data from cryo-EM show their true power when combined with modeling and bioinformatics methods such as rigid docking and secondary structure hunting. Even in high resolution structure determination, molecular modeling is always necessary to calculate structures from data, to complement the missing information and to evaluate and score the obtained structures. With sparse data, all three of these aspects become increasingly difficult, and the quality of the modeling approach becomes more important. With data alone, algorithms may not converge any more; scoring against data becomes meaningless; and the potential energy function becomes central, not only as a help in making algorithms converge but also to score and evaluate the structures. In addition to the sparsity of the data, hybrid approaches bring the additional difficulty that the different sources of data may have rather different quality, and may in the extreme case be incompatible with each other. In addition to scoring the structures, modeling should also score in some way the data going into the calculation.
This special issue brings together some of the numerous efforts to solve the problems that come from sparsity of data and from integrating data from different sources in hybrid approaches. The methods range from predominantly force-field based to mostly data based. Systems of very different sizes, ranging from single domains to multi-component complexes, are treated. We hope that you will enjoy reading the issue and find it a useful and inspiring resource.

PDF DOI [BibTex]

Batch-Mode Active-Learning Methods for the Interactive Classification of Remote Sensing Images

Demir, B., Persello, C., Bruzzone, L.

IEEE Transactions on Geoscience and Remote Sensing, 49(3):1014-1031, March 2011 (article)

Abstract
This paper investigates different batch-mode active-learning (AL) techniques for the classification of remote sensing (RS) images with support vector machines. This is done by generalizing to multiclass problems techniques defined for binary classifiers. The investigated techniques exploit different query functions, which are based on the evaluation of two criteria: uncertainty and diversity. The uncertainty criterion is associated with the confidence of the supervised algorithm in correctly classifying the considered sample, while the diversity criterion aims at selecting a set of unlabeled samples that are as diverse (distant from one another) as possible, thus reducing the redundancy among the selected samples. The combination of the two criteria results in the selection of the potentially most informative set of samples at each iteration of the AL process. Moreover, we propose a novel query function that is based on a kernel-clustering technique for assessing the diversity of samples and a new strategy for selecting the most informative representative sample from each cluster. The investigated and proposed techniques are theoretically and experimentally compared with state-of-the-art methods adopted for RS applications. This is accomplished by considering very high resolution multispectral and hyperspectral images. By this comparison, we observed that the proposed method resulted in better accuracy than the other investigated and state-of-the-art methods on both of the considered data sets. Furthermore, we derived some guidelines on the design of AL systems for the classification of different types of RS images.
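
A toy sketch of the uncertainty-plus-diversity selection (scikit-learn, binary case): rank the pool by distance to the SVM decision boundary, then enforce diversity by clustering the most uncertain samples and taking one representative per cluster. Plain k-means stands in for the paper's kernel-clustering query function, and all parameters are illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import KMeans

def select_batch(clf, X_pool, batch_size=5, n_candidates=50):
    margin = np.abs(clf.decision_function(X_pool))    # uncertainty criterion
    cand = np.argsort(margin)[:n_candidates]          # most uncertain candidates
    labels = KMeans(n_clusters=batch_size, n_init=10).fit_predict(X_pool[cand])
    batch = [cand[labels == c][np.argmin(margin[cand[labels == c]])]
             for c in range(batch_size)]              # diversity: one per cluster
    return np.asarray(batch)

X = np.random.randn(300, 8)
y = (X[:, 0] + 0.3 * np.random.randn(300) > 0).astype(int)
clf = SVC(kernel="rbf", gamma="scale").fit(X[:40], y[:40])   # small labeled set
query = select_batch(clf, X[40:])                     # pool indices to label next
```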

Web DOI [BibTex]

Statistical mechanics analysis of sparse data

Habeck, M.

Journal of Structural Biology, 173(3):541-548, March 2011 (article)

Abstract
Inferential structure determination uses Bayesian theory to combine experimental data with prior structural knowledge into a posterior probability distribution over protein conformational space. The posterior distribution encodes everything one can say objectively about the native structure in the light of the available data and additional prior assumptions and can be searched for structural representatives. Here an analogy is drawn between the posterior distribution and the canonical ensemble of statistical physics. A statistical mechanics analysis assesses the complexity of a structure calculation globally in terms of ensemble properties. Analogs of the free energy and density of states are introduced; partition functions evaluate the consistency of prior assumptions with data. Critical behavior is observed with dwindling restraint density, which impairs structure determination with too sparse data. However, prior distributions with improved realism ameliorate the situation by lowering the critical number of observations. An in-depth analysis of various experimentally accessible structural parameters and force field terms will facilitate a statistical approach to protein structure determination with sparse data that avoids bias as much as possible.

PDF DOI [BibTex]

Large Scale Bayesian Inference and Experimental Design for Sparse Linear Models

Seeger, M., Nickisch, H.

SIAM Journal on Imaging Sciences, 4(1):166-199, March 2011 (article)

Abstract
Many problems of low-level computer vision and image processing, such as denoising, deconvolution, tomographic reconstruction or super-resolution, can be addressed by maximizing the posterior distribution of a sparse linear model (SLM). We show how higher-order Bayesian decision-making problems, such as optimizing image acquisition in magnetic resonance scanners, can be addressed by querying the SLM posterior covariance, unrelated to the density’s mode. We propose a scalable algorithmic framework, with which SLM posteriors over full, high-resolution images can be approximated for the first time, solving a variational optimization problem which is convex iff posterior mode finding is convex. These methods successfully drive the optimization of sampling trajectories for real-world magnetic resonance imaging through Bayesian experimental design, which has not been attempted before. Our methodology provides new insight into similarities and differences between sparse reconstruction and approximate Bayesian inference, and has important implications for compressive sensing of real-world images.

Web DOI [BibTex]


Learning grasp affordance densities

Detry, R., Kraft, D., Kroemer, O., Peters, J., Krüger, N., Piater, J.

Paladyn: Journal of Behavioral Robotics, 2(1):1-17, March 2011 (article)

Abstract
We address the issue of learning and representing object grasp affordance models. We model grasp affordances with continuous probability density functions (grasp densities) which link object-relative grasp poses to their success probability. The underlying function representation is nonparametric and relies on kernel density estimation to provide a continuous model. Grasp densities are learned and refined from exploration, by letting a robot “play” with an object in a sequence of grasp-and-drop actions: the robot uses visual cues to generate a set of grasp hypotheses, which it then executes, recording their outcomes. When a satisfactory amount of grasp data is available, an importance-sampling algorithm turns it into a grasp density. We evaluate our method in a largely autonomous learning experiment, run on three objects with distinct shapes. The experiment shows how learning increases success rates. It also measures the success rate of grasps chosen to maximize the probability of success, given reaching constraints.
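
A minimal sketch of the density-building step: weight executed grasps by outcome and fit a kernel density estimate. Real grasp densities live on SE(3) with orientation kernels; the Euclidean KDE over positions below is a deliberate simplification with toy data.

```python
import numpy as np
from scipy.stats import gaussian_kde

poses = np.random.randn(200, 3)            # toy object-relative grasp positions
success = np.random.rand(200) < 0.4        # recorded grasp-and-drop outcomes
density = gaussian_kde(poses[success].T, bw_method=0.3)   # grasp density estimate
best = poses[np.argmax(density(poses.T))]  # recorded grasp with highest density
```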

PDF DOI [BibTex]

Client–Server Multitask Learning From Distributed Datasets

Dinuzzo, F., Pillonetto, G., De Nicolao, G.

IEEE Transactions on Neural Networks, 22(2):290-303, February 2011 (article)

Abstract
A client-server architecture to simultaneously solve multiple learning tasks from distributed datasets is described. In such architecture, each client corresponds to an individual learning task and the associated dataset of examples. The goal of the architecture is to perform information fusion from multiple datasets while preserving privacy of individual data. The role of the server is to collect data in real time from the clients and codify the information in a common database. Such information can be used by all the clients to solve their individual learning task, so that each client can exploit the information content of all the datasets without actually having access to private data of others. The proposed algorithmic framework, based on regularization and kernel methods, uses a suitable class of “mixed effect” kernels. The methodology is illustrated through a simulated recommendation system, as well as an experiment involving pharmacological data coming from a multicentric clinical trial.
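
One common form of a “mixed effect” kernel consistent with the abstract (a sketch, not necessarily the paper's exact definition): a shared component that transfers information across tasks plus a task-specific component active only within the same task.

```latex
\[
  K\big((x, i), (x', j)\big)
  \;=\; \lambda\, \bar{K}(x, x')
  \;+\; (1 - \lambda)\, \delta_{ij}\, \widetilde{K}_i(x, x'),
  \qquad 0 \le \lambda \le 1 .
\]
```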

DOI [BibTex]

Extraction of functional information from ongoing brain electrical activity

Besserve, M., Martinerie, J.

IRBM, 32(1):27-34, February 2011 (article)

Abstract
The modern analysis of multivariate electrical brain signals requires advanced statistical tools to automatically extract and quantify their information content. These tools include machine learning techniques and information theory. They are currently used both in basic neuroscience and challenging applications such as brain computer interfaces. We review here how these methods have been used at the Laboratoire d’Électroencéphalographie et de Neurophysiologie Appliquée (LENA) to develop a general tool for the real time analysis of functional brain signals. We then give some perspectives on how these tools can help understanding the biological mechanisms of information processing.

PDF DOI [BibTex]


Learning Visual Representations for Perception-Action Systems

Piater, J., Jodogne, S., Detry, R., Kraft, D., Krüger, N., Kroemer, O., Peters, J.

International Journal of Robotics Research, 30(3):294-307, February 2011 (article)

Abstract
We discuss vision as a sensory modality for systems that interact flexibly with uncontrolled environments. Instead of trying to build a generic vision system that produces task-independent representations, we argue in favor of task-specific, learnable representations. This concept is illustrated by two examples of our own work. First, our RLVC algorithm performs reinforcement learning directly on the visual input space. To make this very large space manageable, RLVC interleaves the reinforcement learner with a supervised classification algorithm that seeks to split perceptual states so as to reduce perceptual aliasing. This results in an adaptive discretization of the perceptual space based on the presence or absence of visual features. Its extension, RLJC, additionally handles continuous action spaces. In contrast to the minimalistic visual representations produced by RLVC and RLJC, our second method learns structural object models for robust object detection and pose estimation by probabilistic inference. To these models, the method associates grasp experiences autonomously learned by trial and error. These experiences form a non-parametric representation of grasp success likelihoods over gripper poses, which we call a grasp density. Thus, object detection in a novel scene simultaneously produces suitable grasping options.

PDF Web DOI [BibTex]

Multi-way set enumeration in weight tensors

Georgii, E., Tsuda, K., Schölkopf, B.

Machine Learning, 82(2):123-155, February 2011 (article)

Abstract
The analysis of n-ary relations receives attention in many different fields, for instance biology, web mining, and social studies. In the basic setting, there are n sets of instances, and each observation associates n instances, one from each set. A common approach to explore these n-way data is the search for n-set patterns, the n-way equivalent of itemsets. More precisely, an n-set pattern consists of specific subsets of the n instance sets such that all possible associations between the corresponding instances are observed in the data. In contrast, traditional itemset mining approaches consider only two-way data, namely items versus transactions. The n-set patterns provide a higher-level view of the data, revealing associative relationships between groups of instances. Here, we generalize this approach in two respects. First, we tolerate missing observations to a certain degree, meaning that we are also interested in n-sets where most (although not all) of the possible associations have been recorded in the data. Second, we take association weights into account. In fact, we propose a method to enumerate all n-sets that satisfy a minimum threshold with respect to the average association weight. Technically, we solve the enumeration task using a reverse search strategy, which allows for effective pruning of the search space. In addition, our algorithm provides a ranking of the solutions and can consider further constraints. We show experimental results on artificial and real-world datasets from different domains.

PDF DOI [BibTex]

A graphical model framework for decoding in the visual ERP-based BCI speller

Martens, S., Mooij, J., Hill, N., Farquhar, J., Schölkopf, B.

Neural Computation, 23(1):160-182, January 2011 (article)

Abstract
We present a graphical model framework for decoding in the visual ERP-based speller system. The proposed framework allows researchers to build generative models from which the decoding rules are obtained in a straightforward manner. We suggest two models for generating brain signals conditioned on the stimulus events. Both models incorporate letter frequency information but assume different dependencies between brain signals and stimulus events. For both models, we derive decoding rules and perform a discriminative training. We show on real visual speller data how decoding performance improves by incorporating letter frequency information and using a more realistic graphical model for the dependencies between the brain signals and the stimulus events. Furthermore, we discuss how the standard approach to decoding can be seen as a special case of the graphical model framework. The letter also gives more insight into the discriminative approach for decoding in the visual speller system.

Web DOI [BibTex]

Robust Control of Teleoperation Systems Interacting with Viscoelastic Soft Tissues

Cho, JH., Son, HI., Bhattacharjee, T., Lee, DG., Lee, DY.

IEEE Transactions on Control Systems Technology, January 2011 (article) In revision

[BibTex]

Effect of Control Parameters and Haptic Cues on Human Perception for Remote Operations

Son, HI., Bhattacharjee, T., Jung, H., Lee, DY.

Experimental Brain Research, January 2011 (article) Submitted

[BibTex]

Joint Genetic Analysis of Gene Expression Data with Inferred Cellular Phenotypes

Parts, L., Stegle, O., Winn, J., Durbin, R.

PLoS Genetics, 7(1):1-10, January 2011 (article)

Abstract
Even within a defined cell type, the expression level of a gene differs in individual samples. The effects of genotype, measured factors such as environmental conditions, and their interactions have been explored in recent studies. Methods have also been developed to identify unmeasured intermediate factors that coherently influence transcript levels of multiple genes. Here, we show how to bring these two approaches together and analyse genetic effects in the context of inferred determinants of gene expression. We use a sparse factor analysis model to infer hidden factors, which we treat as intermediate cellular phenotypes that in turn affect gene expression in a yeast dataset. We find that the inferred phenotypes are associated with locus genotypes and environmental conditions and can explain genetic associations to genes in trans. For the first time, we consider and find interactions between genotype and intermediate phenotypes inferred from gene expression levels, complementing and extending established results.

Web DOI [BibTex]

Reinforcement Learning with Bounded Information Loss

Peters, J., Mülling, K., Altun, Y.

AIP Conference Proceedings, 1305(1):365-372, 2011 (article)

Abstract
Policy search is a successful approach to reinforcement learning. However, policy improvements often result in the loss of information. Hence, it has been marred by premature convergence and implausible solutions. As first suggested in the context of covariant or natural policy gradients, many of these problems may be addressed by constraining the information loss. In this paper, we continue this path of reasoning and suggest two reinforcement learning methods, i.e., a model-based and a model-free algorithm, that bound the loss in relative entropy while maximizing their return. The resulting methods differ significantly from previous policy gradient approaches and yield an exact update step. They work well on typical reinforcement learning benchmark problems as well as on novel evaluations in robotics. We also show a Bayesian bound motivation of this new approach [8].
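
The bounded-information-loss idea in one line, in the standard relative entropy policy search notation: maximize expected return subject to a relative-entropy constraint against the observed state-action distribution q, with ε the allowed information loss.

```latex
\[
  \max_{\pi}\; \mathbb{E}_{\pi}\!\left[ R \right]
  \quad \text{s.t.} \quad
  D_{\mathrm{KL}}\!\left( p_{\pi} \,\middle\|\, q \right) \;\le\; \epsilon .
\]
```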

Web DOI [BibTex]

2002


Optimized Support Vector Machines for Nonstationary Signal Classification

Davy, M., Gretton, A., Doucet, A., Rayner, P.

IEEE Signal Processing Letters, 9(12):442-445, December 2002 (article)

Abstract
This letter describes an efficient method to perform nonstationary signal classification. A support vector machine (SVM) algorithm is introduced and its parameters optimised in a principled way. Simulations demonstrate that our low complexity method outperforms state-of-the-art nonstationary signal classification techniques.

PostScript Web DOI [BibTex]

A New Discriminative Kernel from Probabilistic Models

Tsuda, K., Kawanabe, M., Rätsch, G., Sonnenburg, S., Müller, K.

Neural Computation, 14(10):2397-2414, October 2002 (article)

PDF [BibTex]

Functional Genomics of Osteoarthritis

Aigner, T., Bartnik, E., Zien, A., Zimmer, R.

Pharmacogenomics, 3(5):635-650, September 2002 (article)

Web [BibTex]

Constructing Boosting algorithms from SVMs: an application to one-class classification.

Rätsch, G., Mika, S., Schölkopf, B., Müller, K.

IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(9):1184-1199, September 2002 (article)

Abstract
We show via an equivalence of mathematical programs that a support vector (SV) algorithm can be translated into an equivalent boosting-like algorithm and vice versa. We exemplify this translation procedure for a new algorithm—one-class leveraging—starting from the one-class support vector machine (1-SVM). This is a first step toward unsupervised learning in a boosting framework. Building on so-called barrier methods known from the theory of constrained optimization, it returns a function, written as a convex combination of base hypotheses, that characterizes whether a given test point is likely to have been generated from the distribution underlying the training data. Simulations on one-class classification problems demonstrate the usefulness of our approach.

DOI [BibTex]

Co-Clustering of Biological Networks and Gene Expression Data

Hanisch, D., Zien, A., Zimmer, R., Lengauer, T.

Bioinformatics, 18(Suppl 1):145S-154S, July 2002 (article)

Abstract
Motivation: Large scale gene expression data are often analysed by clustering genes based on gene expression data alone, though a priori knowledge in the form of biological networks is available. The use of this additional information promises to improve exploratory analysis considerably. Results: We propose constructing a distance function which combines information from expression data and biological networks. Based on this function, we compute a joint clustering of genes and vertices of the network. This general approach is elaborated for metabolic networks. We define a graph distance function on such networks and combine it with a correlation-based distance function for gene expression measurements. A hierarchical clustering and an associated statistical measure is computed to arrive at a reasonable number of clusters. Our method is validated using expression data of the yeast diauxic shift. The resulting clusters are easily interpretable in terms of the biochemical network and the gene expression data and suggest that our method is able to automatically identify processes that are relevant under the measured conditions.

Web [BibTex]

Confidence measures for protein fold recognition

Sommer, I., Zien, A., von Ohsen, N., Zimmer, R., Lengauer, T.

Bioinformatics, 18(6):802-812, June 2002 (article)

[BibTex]

The contributions of color to recognition memory for natural scenes

Wichmann, F., Sharpe, L., Gegenfurtner, K.

Journal of Experimental Psychology: Learning, Memory and Cognition, 28(3):509-520, May 2002 (article)

Abstract
The authors used a recognition memory paradigm to assess the influence of color information on visual memory for images of natural scenes. Subjects performed 5-10% better for colored than for black-and-white images independent of exposure duration. Experiment 2 indicated little influence of contrast once the images were suprathreshold, and Experiment 3 revealed that performance worsened when images were presented in color and tested in black and white, or vice versa, leading to the conclusion that the surface property color is part of the memory representation. Experiments 4 and 5 exclude the possibility that the superior recognition memory for colored images results solely from attentional factors or saliency. Finally, the recognition memory advantage disappears for falsely colored images of natural scenes: The improvement in recognition memory depends on the color congruence of presented images with learned knowledge about the color gamut found within natural scenes. The results can be accounted for within a multiple memory systems framework.

PDF Web DOI [BibTex]

Training invariant support vector machines

DeCoste, D., Schölkopf, B.

Machine Learning, 46(1-3):161-190, January 2002 (article)

Abstract
Practical experience has shown that in order to obtain the best possible performance, prior knowledge about invariances of a classification problem at hand ought to be incorporated into the training procedure. We describe and review all known methods for doing so in support vector machines, provide experimental results, and discuss their respective merits. One of the significant new results reported in this work is our recent achievement of the lowest reported test error on the well-known MNIST digit recognition benchmark task, with SVM training times that are also significantly faster than previous SVM methods.
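
One of the reviewed methods, the virtual support vector idea, in a toy scikit-learn sketch: train, extract the support vectors, add invariance-transformed copies (1-pixel shifts here), and retrain. Data, shapes, and parameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.svm import SVC

def pixel_shifts(X, shape=(8, 8)):
    """Left/right 1-pixel shifts of flattened images."""
    imgs = X.reshape(-1, *shape)
    return np.vstack([np.roll(imgs, s, axis=2).reshape(len(X), -1) for s in (-1, 1)])

X = np.random.rand(200, 64)                       # toy flattened images
y = np.random.randint(0, 2, 200)
svm = SVC(kernel="rbf", gamma="scale").fit(X, y)
X_virtual = pixel_shifts(X[svm.support_])         # transformed support vectors
y_virtual = np.tile(y[svm.support_], 2)
svm_inv = SVC(kernel="rbf", gamma="scale").fit(   # retrain on the augmented set
    np.vstack([X, X_virtual]), np.concatenate([y, y_virtual]))
```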

PDF DOI [BibTex]

Model Selection for Small Sample Regression

Chapelle, O., Vapnik, V., Bengio, Y.

Machine Learning, 48(1-3):9-23, 2002 (article)

Abstract
Model selection is an important ingredient of many machine learning algorithms, in particular when the sample size is small, in order to strike the right trade-off between overfitting and underfitting. Previous classical results for linear regression are based on an asymptotic analysis. We present a new penalization method for performing model selection for regression that is appropriate even for small samples. Our penalization is based on an accurate estimator of the ratio of the expected training error and the expected generalization error, in terms of the expected eigenvalues of the input covariance matrix.

PostScript [BibTex]

Contrast discrimination with sinusoidal gratings of different spatial frequency

Bird, C., Henning, G., Wichmann, F.

Journal of the Optical Society of America A, 19(7), pages: 1267-1273, 2002 (article)

Abstract
The detectability of contrast increments was measured as a function of the contrast of a masking or “pedestal” grating at a number of different spatial frequencies ranging from 2 to 16 cycles per degree of visual angle. The pedestal grating always had the same orientation, spatial frequency and phase as the signal. The shape of the contrast increment threshold versus pedestal contrast (TvC) functions depends on the performance level used to define the “threshold,” but when both axes are normalized by the contrast corresponding to 75% correct detection at each frequency, the TvC functions at a given performance level are identical. Confidence intervals on the slope of the rising part of the TvC functions are so wide that it is not possible with our data to reject Weber’s Law.

PDF [BibTex]

A Bennett Concentration Inequality and Its Application to Suprema of Empirical Processes

Bousquet, O.

C. R. Acad. Sci. Paris, Ser. I, 334, pages: 495-500, 2002 (article)

Abstract
We introduce new concentration inequalities for functions on product spaces. They allow us to obtain a Bennett-type deviation bound for suprema of empirical processes indexed by upper bounded functions. The result improves on Rio's version (2001) of Talagrand's inequality (1996) for identically distributed variables.
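
The bound has the following shape (stated from memory, so treat the constants as indicative rather than the paper's exact statement): for i.i.d. X_i, a countable class F of functions bounded by 1 with variance at most σ², and Z the supremum of the empirical process,

```latex
% Z = sup_{f in F} sum_{i=1}^n f(X_i), with E f = 0, |f| <= 1, Var f <= sigma^2
\[
  \Pr\!\left( Z \;\ge\; \mathbb{E}Z
    + \sqrt{2\,\big(n\sigma^2 + 2\,\mathbb{E}Z\big)\,t}
    + \tfrac{t}{3} \right) \;\le\; e^{-t},
  \qquad t \ge 0 .
\]
```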

PDF PostScript [BibTex]


Numerical evolution of axisymmetric, isolated systems in general relativity

Frauendiener, J., Hein, M.

Physical Review D, 66, pages: 124004, 2002 (article)

Abstract
We describe in this article a new code for evolving axisymmetric isolated systems in general relativity. Such systems are described by asymptotically flat space-times, which have the property that they admit a conformal extension. We are working directly in the extended conformal manifold and solve numerically Friedrich's conformal field equations, which state that Einstein's equations hold in the physical space-time. Because of the compactness of the conformal space-time the entire space-time can be calculated on a finite numerical grid. We describe in detail the numerical scheme, especially the treatment of the axisymmetry and the boundary.

GZIP [BibTex]

Marginalized kernels for biological sequences

Tsuda, K., Kin, T., Asai, K.

Bioinformatics, 18(Suppl 1):268-275, 2002 (article)

PDF [BibTex]

Stability and Generalization

Bousquet, O., Elisseeff, A.

Journal of Machine Learning Research, 2, pages: 499-526, 2002 (article)

Abstract
We define notions of stability for learning algorithms and show how to use these notions to derive generalization error bounds based on the empirical error and the leave-one-out error. The methods we use can be applied in the regression framework as well as in the classification one when the classifier is obtained by thresholding a real-valued function. We study the stability properties of large classes of learning algorithms such as regularization based algorithms. In particular we focus on Hilbert space regularization and Kullback-Leibler regularization. We demonstrate how to apply the results to SVM for regression and classification.
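
The flavor of the resulting bounds, reproduced from memory with indicative constants: for a β-uniformly-stable algorithm A with loss bounded by M, with probability at least 1 − δ over a sample S of size n,

```latex
\[
  R(A_S) \;\le\; \widehat{R}_{\mathrm{emp}}(A_S) \;+\; 2\beta
  \;+\; \big(4 n \beta + M\big) \sqrt{\frac{\ln(1/\delta)}{2n}} ,
\]
```

which is non-trivial whenever the stability β decays faster than 1/√n.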

PDF PostScript [BibTex]

Subspace information criterion for non-quadratic regularizers – model selection for sparse regressors

Tsuda, K., Sugiyama, M., Müller, K.

IEEE Transactions on Neural Networks, 13(1):70-80, 2002 (article)

PDF [BibTex]

Modeling splicing sites with pairwise correlations

Arita, M., Tsuda, K., Asai, K.

Bioinformatics, 18(Suppl 2):27-34, 2002 (article)

PDF [BibTex]

Perfusion Quantification using Gaussian Process Deconvolution

Andersen, IK., Szymkowiak, A., Rasmussen, CE., Hanson, LG., Marstrand, JR., Larsson, HBW., Hansen, LK.

Magnetic Resonance in Medicine, 48:351-361, 2002 (article)

Abstract
The quantification of perfusion using dynamic susceptibility contrast MR imaging requires deconvolution to obtain the residual impulse-response function (IRF). Here, a method using a Gaussian process for deconvolution, GPD, is proposed. The fact that the IRF is smooth is incorporated as a constraint in the method. The GPD method, which automatically estimates the noise level in each voxel, has the advantage that model parameters are optimized automatically. The GPD is compared to singular value decomposition (SVD) using a common threshold for the singular values and to SVD using a threshold optimized according to the noise level in each voxel. The comparison is carried out using artificial data as well as data from healthy volunteers. It is shown that GPD is comparable to SVD with a variable optimized threshold when determining the maximum of the IRF, which is directly related to the perfusion. GPD provides a better estimate of the entire IRF. As the signal-to-noise ratio increases or the time resolution of the measurements increases, GPD is shown to be superior to SVD. This is also found for large distribution volumes.
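
A compact sketch of the core linear-Gaussian step: with a smooth GP prior on the IRF r and observations y = A r + noise, where A is the causal convolution with the arterial input function, the posterior mean is a single linear solve. The kernel, AIF, and noise level below are toy assumptions, not the paper's estimated quantities.

```python
import numpy as np

t = np.linspace(0, 60, 61)
dt = t[1] - t[0]
aif = t * np.exp(-t / 4.0)                                  # toy arterial input function
A = np.tril(np.array([[aif[i - j] for j in range(len(t))]   # causal convolution matrix
                      for i in range(len(t))])) * dt
K = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / 5.0**2)  # smooth GP prior on the IRF
r_true = np.exp(-t / 8.0)                                   # toy residue function
y = A @ r_true + 0.01 * np.random.randn(len(t))
r_post = K @ A.T @ np.linalg.solve(A @ K @ A.T + 0.01**2 * np.eye(len(t)), y)
```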

PDF PostScript [BibTex]

Tracking a Small Set of Experts by Mixing Past Posteriors

Bousquet, O., Warmuth, M.

Journal of Machine Learning Research, 3, pages: 363-396, (Editors: Long, P.), 2002 (article)

Abstract
In this paper, we examine on-line learning problems in which the target concept is allowed to change over time. In each trial a master algorithm receives predictions from a large set of n experts. Its goal is to predict almost as well as the best sequence of such experts chosen off-line by partitioning the training sequence into k+1 sections and then choosing the best expert for each section. We build on methods developed by Herbster and Warmuth and consider an open problem posed by Freund where the experts in the best partition are from a small pool of size m. Since k >> m, the best expert shifts back and forth between the experts of the small pool. We propose algorithms that solve this open problem by mixing the past posteriors maintained by the master algorithm. We relate the number of bits needed for encoding the best partition to the loss bounds of the algorithms. Instead of paying log n for choosing the best expert in each section we first pay log (n choose m) bits in the bounds for identifying the pool of m experts and then log m bits per new section. In the bounds we also pay twice for encoding the boundaries of the sections.
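
A toy sketch of the mechanism: an exponential-weights master that, after each trial, mixes the new posterior with (here) a uniform average of all past posteriors, so that experts that were good earlier can be recovered quickly when the best expert shifts back. The uniform mixing scheme and the parameters are illustrative choices, not the paper's tuned variants.

```python
import numpy as np

def mix_past_posteriors(losses, eta=2.0, alpha=0.05):
    """Cumulative master loss of exponential weights with past-posterior mixing."""
    T, n = losses.shape
    w = np.full(n, 1.0 / n)
    past, total = [w.copy()], 0.0
    for t in range(T):
        total += w @ losses[t]                    # master's expected loss this trial
        v = w * np.exp(-eta * losses[t])          # exponential-weights loss update
        v /= v.sum()
        w = (1 - alpha) * v + alpha * np.mean(past, axis=0)   # mix past posteriors
        past.append(w.copy())
    return total

losses = np.random.rand(1000, 10)                 # per-trial loss of each expert
print(mix_past_posteriors(losses))
```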

PDF PostScript [BibTex]

A femoral arteriovenous shunt facilitates arterial whole blood sampling in animals

Weber, B., Burger, C., Biro, P., Buck, A.

European Journal of Nuclear Medicine and Molecular Imaging, 29, pages: 319-323, 2002 (article)

[BibTex]

Contrast discrimination with pulse-trains in pink noise

Henning, G., Bird, C., Wichmann, F.

Journal of the Optical Society of America A, 19(7), pages: 1259-1266, 2002 (article)

Abstract
Detection performance was measured with sinusoidal and pulse-train gratings. Although the 2.09-c/deg pulse-train, or line, gratings contained at least 8 harmonics, all at equal contrast, they were no more detectable than their most detectable component. The addition of broadband pink noise designed to equalize the detectability of the components of the pulse train made the pulse train about a factor of four more detectable than any of its components. However, in contrast-discrimination experiments, with a pedestal or masking grating of the same form and phase as the signal and of 15% contrast, the noise did not affect the discrimination performance of the pulse train relative to that obtained with its sinusoidal components. We discuss the implications of these observations for models of early vision, in particular the implications for possible sources of internal noise.

PDF [BibTex]
