2011


Learning Dynamic Tactile Sensing with Robust Vision-based Training

Kroemer, O., Lampert, C., Peters, J.

IEEE Transactions on Robotics, 27(3):545-557 , June 2011 (article)

Abstract
Dynamic tactile sensing is a fundamental ability for recognizing materials and objects. However, while humans are born with partially developed dynamic tactile sensing and quickly master this skill, today's robots remain in their infancy. The development of such a sense requires not only better sensors but also the right algorithms to deal with these sensors' data. For example, when classifying a material based on touch, the data are noisy, high-dimensional, and contain irrelevant signals as well as essential ones. Few classification methods from machine learning can deal with such problems. In this paper, we propose an efficient approach to infer suitable lower-dimensional representations of the tactile data. In order to classify materials based only on the sense of touch, these representations are autonomously discovered using visual information of the surfaces during training. However, accurately pairing vision and tactile samples in real-robot applications is a difficult problem. The proposed approach therefore works with weak pairings between the modalities. Experiments show that the resulting approach is very robust and yields significantly higher classification performance based only on dynamic tactile sensing.

Web DOI [BibTex]



Algebraic polynomials and moments of stochastic integrals

Langovoy, M.

Statistics & Probability Letters, 81(6):627-631, June 2011 (article)

Abstract
We propose an algebraic method for proving estimates on moments of stochastic integrals. The method uses qualitative properties of roots of algebraic polynomials from certain general classes. As an application, we give a new proof of a variation of the Burkholder–Davis–Gundy inequality for the case of stochastic integrals with respect to real locally square integrable martingales. Further possible applications and extensions of the method are outlined.
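
For reference, the classical Burkholder–Davis–Gundy inequality, of which the paper proves a variation for stochastic integrals with respect to locally square integrable martingales, can be stated as follows (standard form, not quoted from the paper; [M] denotes the quadratic variation of a local martingale M with M_0 = 0):

```latex
c_p\, \mathbb{E}\!\left[ [M]_T^{p/2} \right]
\;\le\;
\mathbb{E}\!\left[ \sup_{t \le T} |M_t|^{p} \right]
\;\le\;
C_p\, \mathbb{E}\!\left[ [M]_T^{p/2} \right],
\qquad p \ge 1,
```

with constants c_p, C_p depending only on p.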

PDF DOI [BibTex]



Inference for psychometric functions in the presence of nonstationary behavior

Fründ, I., Haenel, N., Wichmann, F.

Journal of Vision, 11(6):1-19, May 2011 (article)

Abstract
Measuring sensitivity is at the heart of psychophysics. Often, sensitivity is derived from estimates of the psychometric function, which relates response probability to stimulus intensity. In estimating these response probabilities, most studies assume stationary observers: responses are expected to depend only on the intensity of a presented stimulus and not on other factors such as stimulus sequence, duration of the experiment, or the responses on previous trials. Unfortunately, a number of factors such as learning, fatigue, or fluctuations in attention and motivation will typically result in violations of this assumption. The severity of these violations is as yet unknown. We use Monte Carlo simulations to show that violations of these assumptions can result in underestimation of confidence intervals for parameters of the psychometric function. Even worse, collecting more trials does not eliminate this misestimation of confidence intervals. We present a simple adjustment of the confidence intervals that corrects for the underestimation almost independently of the number of trials and the particular type of violation.
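
The Monte Carlo logic described in the abstract can be sketched compactly. The following toy simulation (my own illustration, not the authors' code; all settings are invented) fits a logistic psychometric function by maximum likelihood and checks bootstrap confidence-interval coverage for a stationary observer versus one whose threshold drifts across blocks:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 6)     # stimulus intensities
n = 40                        # trials per intensity (4 blocks of 10)

def nll(params, k):
    """Negative log-likelihood of a logistic psychometric function."""
    alpha, beta = params      # threshold and slope
    p = np.clip(expit(beta * (x - alpha)), 1e-6, 1 - 1e-6)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

def fit(k):
    return minimize(nll, x0=[0.0, 1.0], args=(k,), method="Nelder-Mead").x

def simulate(alpha, beta, drift):
    """Binomial responses; drift shifts the threshold across 4 blocks,
    mimicking a nonstationary observer (learning, fatigue, ...)."""
    return sum(rng.binomial(n // 4, expit(beta * (x - alpha - drift * b)))
               for b in range(4))

def ci_covers(drift, alpha=0.0, beta=2.0, boots=100):
    a_hat, b_hat = fit(simulate(alpha, beta, drift))
    # parametric bootstrap that (wrongly, if drift != 0) assumes stationarity
    a_star = [fit(rng.binomial(n, expit(b_hat * (x - a_hat))))[0]
              for _ in range(boots)]
    lo, hi = np.percentile(a_star, [2.5, 97.5])
    target = alpha + drift * 1.5   # session-mean threshold
    return lo <= target <= hi

for drift in (0.0, 0.3):
    cover = np.mean([ci_covers(drift) for _ in range(50)])
    print(f"threshold drift {drift}: 95% CI covers the true value in {cover:.0%} of runs")
```

Under drift, the stationarity-assuming bootstrap intervals are too narrow, so coverage falls below the nominal 95%; this is the effect the paper quantifies and corrects.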

Web DOI [BibTex]



Transition from the locked in to the completely locked-in state: A physiological analysis

Ramos Murguialday, A., Hill, J., Bensch, M., Martens, S., Halder, S., Nijboer, F., Schölkopf, B., Birbaumer, N., Gharabaghi, A.

Clinical Neurophysiology, 122(5):925-933 , May 2011 (article)

Abstract
Objective To clarify the physiological and behavioral boundaries between the locked-in state (LIS) and the completely locked-in state (CLIS) (no voluntary eye movements, no communication possible) through electrophysiological data and to secure brain–computer-interface (BCI) communication. Methods Electromyography from facial muscles and the external anal sphincter (EAS), electrooculography, and electrocorticographic data during different psychophysiological tests were acquired to define electrophysiological differences in an amyotrophic lateral sclerosis (ALS) patient with an intracranially implanted grid of 112 electrodes for nine months while the patient passed from the LIS to the CLIS. Results At the very end of the LIS there was no facial muscle or EAS activity, but eye control remained. Eye movements were slow and lasted only for short periods. During CLIS, event-related brain potentials (ERPs) to passive limb movements and auditory stimuli were recorded, whereas vibrotactile stimulation of different body parts elicited no ERP response. Conclusions The results presented contradict the commonly accepted assumption that the EAS is the last remaining muscle under voluntary control and demonstrate complete loss of eye movements in CLIS. The eye muscles were shown to be the last muscle group under voluntary control. The findings suggest that ALS is a multisystem disorder, even affecting afferent sensory pathways. Significance Auditory and proprioceptive brain–computer-interface (BCI) systems are the only remaining communication channels in CLIS.

PDF DOI [BibTex]



Incremental online sparsification for model learning in real-time robot control

Nguyen-Tuong, D., Peters, J.

Neurocomputing, 74(11):1859-1867, May 2011 (article)

Abstract
For many applications such as compliant, accurate robot tracking control, dynamics models learned from data can help to achieve both compliant control performance as well as high tracking quality. Online learning of these dynamics models allows the robot controller to adapt itself to changes in the dynamics (e.g., due to time-variant nonlinearities or unforeseen loads). However, online learning in real-time applications -- as required in control -- cannot be realized by straightforward usage of off-the-shelf machine learning methods such as Gaussian process regression or support vector regression. In this paper, we propose a framework for online, incremental sparsification with a fixed budget designed for fast real-time model learning. The proposed approach employs a sparsification method based on an independence measure. In combination with an incremental learning approach such as incremental Gaussian process regression, we obtain a model approximation method which is applicable in real-time online learning. It exhibits competitive learning accuracy when compared with standard regression techniques. Implementation on a real Barrett WAM robot demonstrates the applicability of the approach in real-time online model learning for real world systems.
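
The flavor of such test-before-insert sparsification can be sketched as follows. This is a toy stand-in, not the paper's method: it uses the classic approximate-linear-dependence criterion in place of the paper's independence measure, and a naive matrix inverse where an incremental rank-one update would be used in a real-time setting:

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return np.exp(-gamma * d**2)

class SparseDictionary:
    def __init__(self, nu=0.3, budget=20):
        self.nu, self.budget = nu, budget   # novelty threshold, max size
        self.D = None                       # dictionary points
        self.Kinv = None                    # inverse kernel matrix on D

    def consider(self, x):
        """Add x to the dictionary only if it is sufficiently independent."""
        x = np.atleast_2d(x)
        if self.D is None:
            self.D = x
            self.Kinv = np.linalg.inv(rbf(x, x) + 1e-8 * np.eye(1))
            return True
        k = rbf(self.D, x)                        # kernel vector to dictionary
        a = self.Kinv @ k                         # best linear reconstruction
        delta = float(rbf(x, x) - k.T @ a)        # reconstruction error (ALD)
        if delta > self.nu and len(self.D) < self.budget:
            self.D = np.vstack([self.D, x])
            K = rbf(self.D, self.D) + 1e-8 * np.eye(len(self.D))
            self.Kinv = np.linalg.inv(K)          # naive; rank-one in practice
            return True
        return False

# stream 2-D points; only informative ones enter the budgeted dictionary
dic = SparseDictionary()
added = sum(dic.consider(p) for p in np.random.default_rng(1).normal(size=(500, 2)))
print(f"kept {added} of 500 streamed points")
```

The fixed budget keeps the per-sample cost bounded, which is what makes such a scheme usable inside a real-time control loop.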

PDF DOI [BibTex]


Causal Influence of Gamma Oscillations on the Sensorimotor Rhythm

Grosse-Wentrup, M., Schölkopf, B., Hill, J.

NeuroImage, 56(2):837-842, May 2011 (article)

Abstract
Gamma oscillations of the electromagnetic field of the brain are known to be involved in a variety of cognitive processes, and are believed to be fundamental for information processing within the brain. While gamma oscillations have been shown to be correlated with brain rhythms at different frequencies, to date no empirical evidence has been presented that supports a causal influence of gamma oscillations on other brain rhythms. In this work, we study the relation of gamma oscillations and the sensorimotor rhythm (SMR) in healthy human subjects using electroencephalography. We first demonstrate that modulation of the SMR, induced by motor imagery of either the left or right hand, is positively correlated with the power of frontal and occipital gamma oscillations, and negatively correlated with the power of centro-parietal gamma oscillations. We then demonstrate that the simplest causal structure capable of explaining the observed correlation of gamma oscillations and the SMR entails a causal influence of gamma oscillations on the SMR. This finding supports the fundamental role attributed to gamma oscillations for information processing within the brain, and is of particular importance for brain–computer interfaces (BCIs). As modulation of the SMR is typically used in BCIs to infer a subject's intention, our findings entail that gamma oscillations have a causal influence on a subject's capability to utilize a BCI for means of communication.

PDF DOI [BibTex]


The effect of patient positioning aids on PET quantification in PET/MR imaging

Mantlik, F., Hofmann, M., Werner, M., Sauter, A., Kupferschläger, J., Schölkopf, B., Pichler, B., Beyer, T.

European Journal of Nuclear Medicine and Molecular Imaging, 38(5):920-929, May 2011 (article)

Abstract
Objectives Clinical PET/MR requires the use of patient positioning aids to immobilize and support patients for the duration of the combined examination. Ancillary immobilization devices contribute to overall attenuation of the PET signal, but are not detected with conventional MR sequences and, hence, are ignored in standard MR-based attenuation correction (MR-AC). We report on the quantitative effect of not accounting for the attenuation of patient positioning aids in combined PET/MR imaging. Methods We used phantom and patient data acquired with positioning aids on a PET/CT scanner (Biograph 16, HI-REZ) to mimic PET/MR imaging conditions. Reference CT-based attenuation maps were generated from measured (original) CT transmission images (origCT-AC). We also created MR-like attenuation maps by following the same conversion procedure of the attenuation values except for the prior delineation and subtraction of the positioning aids from the CT images (modCT-AC). First, a uniform 68Ge cylinder was positioned centrally in the PET/CT scanner and fixed with a vacuum mattress (10 cm thick) and, in a repeat examination, with MR positioning foam pads. Second, 16 patient datasets were selected for subsequent processing. All patients were regionally immobilized with positioning aids: a vacuum mattress for head/neck imaging (nine patients) and a foam mattress for imaging of the lower extremities (seven patients). PET images were reconstructed following CT-based attenuation and scatter correction using the original and modified (MR-like) CT images: PETorigCT-AC and PETmodCT-AC, respectively. PET images following origCT-AC and modCT-AC were compared visually and in terms of mean differences of voxels with a standardized uptake value of at least 1.0. In addition, we report maximum activity concentration in lesions for selected patients. Results In the phantom study employing the vacuum mattress the average voxel activity in PETmodCT-AC was underestimated by 6.4% compared to PETorigCT-AC, with 3.4% of the PET voxels being underestimated by 10% or more. When the MR foam pads were not accounted for during AC, PETmodCT-AC was underestimated by 1.1% on average, with none of the PET voxels being underestimated by 10% or more. Evaluation of the head/neck patient data showed a decrease of 8.4% ([68Ga]DOTATOC) and 7.4% ([18F]FDG) when patient positioning aids were not accounted for during AC, while the corresponding decrease was insignificant for the lower extremities. Conclusion Depending on the size and density of the positioning aids used, a regionally variable underestimation of PET activity following AC is observed when positioning aids are not accounted for. This underestimation may become relevant in combined PET/MR imaging of patients with neuropsychiatric indications, but appears to be of no clinical relevance in imaging the extremities.

PDF DOI [BibTex]



Improving quantification of functional networks with EEG inverse problem: Evidence from a decoding point of view

Besserve, M., Martinerie, J., Garnero, L.

NeuroImage, 55(4):1536-1547, April 2011 (article)

Abstract
Decoding experimental conditions from single-trial electroencephalographic (EEG) signals is becoming a major challenge for the study of brain function and for real-time applications such as brain-computer interfaces. EEG source reconstruction offers principled ways to estimate the cortical activities from EEG signals. But to what extent it can enhance informative brain signals in single trials has not been addressed in a general setting. We tested this using the minimum norm estimate solution (MNE) to estimate spectral power and coherence features at the cortical level. With a fast implementation, we computed a support vector machine (SVM) classifier output from these quantities in real time, without a prior on the relevant functional networks. We applied this approach to single-trial decoding of ongoing mental imagery tasks using EEG data recorded in 5 subjects. Our results show that reconstructing the underlying cortical network dynamics significantly outperforms a usual electrode-level approach in terms of information transfer and also reduces redundancy between coherence and power features, supporting a decrease of volume conduction effects. Additionally, the classifier coefficients reflect the most informative features of network activity, showing an important contribution of localized motor and sensory brain areas, and of coherence between areas up to 6 cm distance. This study provides a computationally efficient and interpretable strategy to extract information from functional networks at the cortical level in single trials. Moreover, this sets a general framework to evaluate the performance of EEG source reconstruction methods by their decoding abilities.

Web DOI [BibTex]


Using brain–computer interfaces to induce neural plasticity and restore function

Grosse-Wentrup, M., Mattia, D., Oweiss, K.

Journal of Neural Engineering, 8(2):1-5, April 2011 (article)

Abstract
Analyzing neural signals and providing feedback in real-time is one of the core characteristics of a brain-computer interface (BCI). As this feature may be employed to induce neural plasticity, utilizing BCI-technology for therapeutic purposes is increasingly gaining popularity in the BCI-community. In this review, we discuss the state-of-the-art of research on this topic, address the principles of and challenges in inducing neural plasticity by means of a BCI, and delineate the problems of study design and outcome evaluation arising in this context. The review concludes with a list of open questions and recommendations for future research in this field.

PDF DOI [BibTex]



EPIBLASTER-fast exhaustive two-locus epistasis detection strategy using graphical processing units

Kam-Thong, T., Czamara, D., Tsuda, K., Borgwardt, K., Lewis, C., Erhardt-Lehmann, A., Hemmer, B., Rieckmann, P., Daake, M., Weber, F., Wolf, C., Ziegler, A., Pütz, B., Holsboer, F., Schölkopf, B., Müller-Myhsok, B.

European Journal of Human Genetics, 19(4):465-471, April 2011 (article)

Abstract
Detection of epistatic interaction between loci has been postulated to provide a more in-depth understanding of the complex biological and biochemical pathways underlying human diseases. Studying the interaction between two loci is the natural progression following traditional and well-established single locus analysis. However, the added costs and time duration required for the computation involved have thus far deterred researchers from pursuing a genome-wide analysis of epistasis. In this paper, we propose a method allowing such analysis to be conducted very rapidly. The method, dubbed EPIBLASTER, is applicable to case–control studies and consists of a two-step process in which the difference in Pearson's correlation coefficients is computed between controls and cases across all possible SNP pairs as an indication of significant interaction warranting further analysis. For the subset of interactions deemed potentially significant, a second-stage analysis is performed using the likelihood ratio test from the logistic regression to obtain the P-value for the estimated coefficients of the individual effects and the interaction term. The algorithm is implemented using the parallel computational capability of commercially available graphical processing units to greatly reduce the computation time involved. In the current setup and example data sets (211 cases, 222 controls, 299468 SNPs; and 601 cases, 825 controls, 291095 SNPs), this coefficient evaluation stage can be completed in roughly 1 day. Our method allows for exhaustive and rapid detection of significant SNP pair interactions without imposing significant marginal effects of the single loci involved in the pair.
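
The first stage is easy to sketch on a CPU, with NumPy standing in for the paper's GPU kernels (sizes and data here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_ctrls, n_snps = 200, 220, 1000
cases = rng.integers(0, 3, size=(n_cases, n_snps)).astype(float)  # 0/1/2 genotypes
ctrls = rng.integers(0, 3, size=(n_ctrls, n_snps)).astype(float)

def corr(X):
    """Pearson correlation matrix over SNP columns."""
    Z = (X - X.mean(0)) / (X.std(0) + 1e-12)
    return (Z.T @ Z) / len(X)

delta = corr(cases) - corr(ctrls)           # n_snps x n_snps difference matrix
iu = np.triu_indices(n_snps, k=1)           # each unordered SNP pair once
scores = np.abs(delta[iu])

# rank pairs by correlation difference; the top ones go to stage two
top = np.argsort(scores)[::-1][:3]
for idx in top:
    i, j = iu[0][idx], iu[1][idx]
    print(f"SNP pair ({i}, {j}): |delta r| = {scores[idx]:.3f}")
```

The full method then passes only the top-scoring pairs to the likelihood-ratio test of the logistic regression, so the expensive test is never run exhaustively.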

PDF PDF DOI [BibTex]



Model learning for robot control: a survey

Nguyen-Tuong, D., Peters, J.

Cognitive Processing, 12(4):319-340, April 2011 (article)

Abstract
Models are among the most essential tools in robotics, such as kinematics and dynamics models of the robot's own body and controllable external objects. It is widely believed that intelligent mammals also rely on internal models in order to generate their actions. However, while classical robotics relies on manually generated models that are based on human insights into physics, future autonomous, cognitive robots need to be able to automatically generate models that are based on information which is extracted from the data streams accessible to the robot. In this paper, we survey the progress in model learning with a strong focus on robot control on a kinematic as well as dynamical level. Here, a model describes essential information about the behavior of the environment and the influence of an agent on this environment. In the context of model-based learning control, we view the model from three different perspectives. First, we need to study the different possible model learning architectures for robotics. Second, we discuss what kinds of problems these architectures and the domain of robotics imply for the applicable learning methods. From this discussion, we deduce future directions of real-time learning algorithms. Third, we show where these scenarios have been used successfully in several case studies.

PDF PDF DOI [BibTex]



Critical issues in state-of-the-art brain–computer interface signal processing

Krusienski, D., Grosse-Wentrup, M., Galan, F., Coyle, D., Miller, K., Forney, E., Anderson, C.

Journal of Neural Engineering, 8(2):1-8, April 2011 (article)

Abstract
This paper reviews several critical issues facing signal processing for brain–computer interfaces (BCIs) and suggests several recent approaches that should be further examined. The topics were selected based on discussions held during the 4th International BCI Meeting at a workshop organized to review and evaluate the current state of, and issues relevant to, feature extraction and translation of field potentials for BCIs. The topics presented in this paper include the relationship between electroencephalography and electrocorticography, novel features for performance prediction, time-embedded signal representations, phase information, signal non-stationarity, and unsupervised adaptation.

PDF DOI [BibTex]



Dynamics of excitable neural networks with heterogeneous connectivity

Chavez, M., Besserve, M., Le Van Quyen, M.

Progress in Biophysics and Molecular Biology, 105(1-2):29-33, March 2011 (article)

Abstract
A central issue of neuroscience is to understand how neural units integrate internal and external signals to create coherent states. Recently, it has been shown that the sensitivity and dynamic range of neural assemblies are optimal at a critical coupling among their elements. Complex architectures of connections seem to play a constructive role in the reliable coordination of neural units. Here we show that the synchronizability and sensitivity of excitable neural networks can be tuned by diversity in the connection strengths. We illustrate our findings for weighted networks with regular, random, and complex topologies. Additional comparisons of real brain networks support previous studies suggesting that heterogeneity in the connectivity may play a constructive role in information processing. These findings provide insights into the relationship between structure and function of neural circuits.

PDF DOI [BibTex]



A Blind Deconvolution Approach for Improving the Resolution of Cryo-EM Density Maps

Hirsch, M., Schölkopf, B., Habeck, M.

Journal of Computational Biology, 18(3):335-346, March 2011 (article)

Abstract
Cryo-electron microscopy (cryo-EM) plays an increasingly prominent role in structure elucidation of macromolecular assemblies. Advances in experimental instrumentation and computational power have spawned numerous cryo-EM studies of large biomolecular complexes, resulting in the reconstruction of three-dimensional density maps at intermediate and low resolution. In this resolution range, identification and interpretation of structural elements and modeling of biomolecular structure with atomic detail become problematic. In this article, we present a novel algorithm that enhances the resolution of intermediate- and low-resolution density maps. Our underlying assumption is that the low-resolution density map is a blurred and possibly noise-corrupted version of an unknown high-resolution map that we seek to recover by deconvolution. By exploiting the nonnegativity of both the high-resolution map and the blur kernel, we derive multiplicative updates reminiscent of those used in nonnegative matrix factorization. Our framework allows for easy incorporation of additional prior knowledge, such as smoothness and sparseness, on both the sharpened density map and the blur kernel. A probabilistic formulation enables us to derive updates for the hyperparameters; therefore, our approach has no parameter that needs adjustment. We apply the algorithm to simulated three-dimensional electron microscopic data. We show that our method provides better-resolved density maps when compared with B-factor sharpening, especially in the presence of noise. Moreover, our method can use additional information provided by homologous structures, which helps to improve the resolution even further.
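
A one-dimensional caricature of such multiplicative updates (my own sketch; the paper works on 3-D cryo-EM maps and additionally handles priors and hyperparameter updates) shows the mechanics: because the updates are multiplicative, the estimates of both the sharp map and the blur kernel stay nonnegative throughout.

```python
import numpy as np

def conv(a, b):
    return np.convolve(a, b, mode="full")

def corr(a, b):
    # correlation = convolution with the flipped signal, 'valid' part
    return np.convolve(a, b[::-1], mode="valid")

rng = np.random.default_rng(0)
x_true = np.zeros(100); x_true[[20, 50, 70]] = [1.0, 2.0, 1.5]   # sharp "map"
k_true = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)            # blur kernel
y = conv(x_true, k_true) + 0.01 * rng.random(110)                # blurred, noisy

x = np.full(100, 0.5)          # nonnegative initial guesses
k = np.full(11, 0.5)
for _ in range(500):
    # least-squares multiplicative updates: x <- x * (K^T y) / (K^T K x)
    x *= corr(y, k) / (corr(conv(x, k), k) + 1e-12)
    k *= corr(y, x) / (corr(conv(x, k), x) + 1e-12)
    s = k.sum(); k /= s; x *= s   # fix scale ambiguity without changing k*x

print("recovered peaks near:", np.sort(np.argsort(x)[-3:]))
```

The update for x divides the correlation of the data with the kernel by the same quantity computed from the current reconstruction, so fixed points are reached exactly where the model explains the data.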

Web DOI [BibTex]



Combining computational modeling with sparse and low-resolution data

Habeck, M., Nilges, M.

Journal of Structural Biology, 173(3):419, March 2011 (article)

Abstract
Structural biology is moving into a new era by shifting its focus from static structures of single proteins and protein domains to large and often fragile multi-component complexes. Over the past decade, structural genomics initiatives aimed to fill the voids in fold space and to provide a census of all protein structures. Completion of such an atlas of protein structures is still ongoing, but not sufficient for a mechanistic understanding of how living cells function. One of the great challenges is to bridge the gap between atomic resolution detail and the more fuzzy description of the molecular complexes that govern cellular processes or host–pathogen interactions. We want to move from cartoon-like representations of multi-component complexes to atomic resolution structures. To characterize the structures of increasingly large and often flexible complexes, high-resolution structure determination (as was possible, for example, for the ribosome) will likely stay the exception. Rather, data from many different methods providing information on the shape (X-ray crystallography, electron microscopy, SAXS, AFM, etc.) or on contacts between components (mass spectrometry, co-purification, or spectroscopic methods) need to be integrated with prior structural knowledge to build a consistent model of the complex. A particular difficulty is that the ratio between the number of conformational degrees of freedom and the number of measurements becomes unfavorable as we work with large complexes: data become increasingly sparse. Structural characterization of large molecular assemblies often involves a loss in resolution as well as in the number and quality of data. We are good at solving structures of single proteins, but classical high-resolution structure determination by X-ray crystallography and NMR spectroscopy often faces its limits as we move to higher molecular mass and increased flexibility. Therefore, structural studies on large complexes rely on new experimental techniques that complement the classical high-resolution methods. Computational approaches, too, are becoming more important when it comes to integrating and analyzing structural information of often heterogeneous nature. Cryo-electron microscopy may serve as an example of how experimental methods can benefit from computation. Low-resolution data from cryo-EM show their true power when combined with modeling and bioinformatics methods such as rigid docking and secondary-structure hunting. Even in high-resolution structure determination, molecular modeling is always necessary to calculate structures from data, to complement the missing information, and to evaluate and score the obtained structures. With sparse data, all three of these aspects become increasingly difficult, and the quality of the modeling approach becomes more important. With sparse data alone, algorithms may not converge any more; scoring against data becomes meaningless; and the potential energy function becomes central, not only as a help in making algorithms converge but also to score and evaluate the structures. In addition to the sparsity of the data, hybrid approaches bring the additional difficulty that the different sources of data may have rather different quality and may, in the extreme case, be incompatible with each other. In addition to scoring the structures, modeling should also score, in some way, the data going into the calculation.
This special issue brings together some of the numerous efforts to solve the problems that come from sparsity of data and from integrating data from different sources in hybrid approaches. The methods range from predominantly force-field based to mostly data based. Systems of very different sizes, ranging from single domains to multi-component complexes, are treated. We hope that you will enjoy reading the issue and find it a useful and inspiring resource.

PDF DOI [BibTex]



Statistical mechanics analysis of sparse data

Habeck, M.

Journal of Structural Biology, 173(3):541-548, March 2011 (article)

Abstract
Inferential structure determination uses Bayesian theory to combine experimental data with prior structural knowledge into a posterior probability distribution over protein conformational space. The posterior distribution encodes everything one can say objectively about the native structure in the light of the available data and additional prior assumptions and can be searched for structural representatives. Here an analogy is drawn between the posterior distribution and the canonical ensemble of statistical physics. A statistical mechanics analysis assesses the complexity of a structure calculation globally in terms of ensemble properties. Analogs of the free energy and density of states are introduced; partition functions evaluate the consistency of prior assumptions with data. Critical behavior is observed with dwindling restraint density, which impairs structure determination with too sparse data. However, prior distributions with improved realism ameliorate the situation by lowering the critical number of observations. An in-depth analysis of various experimentally accessible structural parameters and force field terms will facilitate a statistical approach to protein structure determination with sparse data that avoids bias as much as possible.

PDF DOI [BibTex]



Large Scale Bayesian Inference and Experimental Design for Sparse Linear Models

Seeger, M., Nickisch, H.

SIAM Journal on Imaging Sciences, 4(1):166-199, March 2011 (article)

Abstract
Many problems of low-level computer vision and image processing, such as denoising, deconvolution, tomographic reconstruction or super-resolution, can be addressed by maximizing the posterior distribution of a sparse linear model (SLM). We show how higher-order Bayesian decision-making problems, such as optimizing image acquisition in magnetic resonance scanners, can be addressed by querying the SLM posterior covariance, unrelated to the density's mode. We propose a scalable algorithmic framework, with which SLM posteriors over full, high-resolution images can be approximated for the first time, solving a variational optimization problem which is convex iff posterior mode finding is convex. These methods successfully drive the optimization of sampling trajectories for real-world magnetic resonance imaging through Bayesian experimental design, which has not been attempted before. Our methodology provides new insight into similarities and differences between sparse reconstruction and approximate Bayesian inference, and has important implications for compressive sensing of real-world images.

Web DOI [BibTex]


Batch-Mode Active-Learning Methods for the Interactive Classification of Remote Sensing Images

Demir, B., Persello, C., Bruzzone, L.

IEEE Transactions on Geoscience and Remote Sensing, 49(3):1014-1031, March 2011 (article)

Abstract
This paper investigates different batch-mode active-learning (AL) techniques for the classification of remote sensing (RS) images with support vector machines. This is done by generalizing techniques defined for binary classifiers to multiclass problems. The investigated techniques exploit different query functions, which are based on the evaluation of two criteria: uncertainty and diversity. The uncertainty criterion is associated with the confidence of the supervised algorithm in correctly classifying the considered sample, while the diversity criterion aims at selecting a set of unlabeled samples that are as diverse (distant from one another) as possible, thus reducing the redundancy among the selected samples. The combination of the two criteria results in the selection of the potentially most informative set of samples at each iteration of the AL process. Moreover, we propose a novel query function that is based on a kernel-clustering technique for assessing the diversity of samples and a new strategy for selecting the most informative representative sample from each cluster. The investigated and proposed techniques are theoretically and experimentally compared with state-of-the-art methods adopted for RS applications. This is accomplished by considering very high resolution multispectral and hyperspectral images. By this comparison, we observed that the proposed method resulted in better accuracy than the other investigated and state-of-the-art methods on both of the considered data sets. Furthermore, we derived some guidelines on the design of AL systems for the classification of different types of RS images.
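
As a rough sketch of the uncertainty-plus-diversity scheme (not the paper's exact method: plain k-means stands in for its kernel-clustering step, and the problem here is binary rather than multiclass):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def select_batch(svm, X_pool, m=30, h=5):
    margin = np.abs(svm.decision_function(X_pool))   # low margin = uncertain
    uncertain = np.argsort(margin)[:m]               # m most uncertain samples
    km = KMeans(n_clusters=h, n_init=10, random_state=0).fit(X_pool[uncertain])
    batch = []
    for c in range(h):                               # one query per cluster
        members = uncertain[km.labels_ == c]
        d = np.linalg.norm(X_pool[members] - km.cluster_centers_[c], axis=1)
        batch.append(members[np.argmin(d)])          # most representative member
    return np.array(batch)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10)); w = rng.normal(size=10)
y = (X @ w > 0).astype(int)
labelled = rng.choice(500, 20, replace=False)
svm = SVC(kernel="rbf", gamma="scale").fit(X[labelled], y[labelled])
print("query these samples next:", select_batch(svm, X))
```

The clustering step is what makes the batch non-redundant: without it, the m most uncertain samples tend to sit next to each other along the decision boundary.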

Web DOI [BibTex]



Learning grasp affordance densities

Detry, R., Kraft, D., Kroemer, O., Peters, J., Krüger, N., Piater, J.

Paladyn: Journal of Behavioral Robotics, 2(1):1-17, March 2011 (article)

Abstract
We address the issue of learning and representing object grasp affordance models. We model grasp affordances with continuous probability density functions (grasp densities) which link object-relative grasp poses to their success probability. The underlying function representation is nonparametric and relies on kernel density estimation to provide a continuous model. Grasp densities are learned and refined from exploration, by letting a robot “play” with an object in a sequence of grasp-and-drop actions: the robot uses visual cues to generate a set of grasp hypotheses, which it then executes, recording their outcomes. When a satisfactory amount of grasp data is available, an importance-sampling algorithm turns it into a grasp density. We evaluate our method in a largely autonomous learning experiment, run on three objects with distinct shapes. The experiment shows how learning increases success rates. It also measures the success rate of grasps chosen to maximize the probability of success, given reaching constraints.
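
The core representation is easy to sketch: a kernel density estimate over the poses of successful grasps, which can then be queried for the most promising reachable grasp. This toy version (my own illustration) uses a 2-D pose and SciPy's Gaussian KDE, whereas real grasp densities live on SE(3) with specialized kernels and importance weights:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# hypothetical grasp log: (lateral offset, wrist angle) plus a success flag
poses = rng.uniform([-1, -np.pi], [1, np.pi], size=(500, 2))
p_success = np.exp(-(poses[:, 0] - 0.2) ** 2 / 0.2
                   - (poses[:, 1] - 1.0) ** 2 / 1.0)   # unknown ground truth
success = rng.random(500) < p_success

# the grasp density: a KDE over the poses of successful grasps (the paper
# additionally importance-weights samples by the proposal that generated them)
grasp_density = gaussian_kde(poses[success].T)

# choose the reachable candidate that maximizes the success density
candidates = rng.uniform([-1, -np.pi], [1, np.pi], size=(2000, 2))
best = candidates[np.argmax(grasp_density(candidates.T))]
print(f"best grasp: offset {best[0]:+.2f}, angle {best[1]:+.2f} rad")
```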

PDF DOI [BibTex]



Client–Server Multitask Learning From Distributed Datasets

Dinuzzo, F., Pillonetto, G., De Nicolao, G.

IEEE Transactions on Neural Networks, 22(2):290-303, February 2011 (article)

Abstract
A client-server architecture to simultaneously solve multiple learning tasks from distributed datasets is described. In such architecture, each client corresponds to an individual learning task and the associated dataset of examples. The goal of the architecture is to perform information fusion from multiple datasets while preserving privacy of individual data. The role of the server is to collect data in real time from the clients and codify the information in a common database. Such information can be used by all the clients to solve their individual learning task, so that each client can exploit the information content of all the datasets without actually having access to private data of others. The proposed algorithmic framework, based on regularization and kernel methods, uses a suitable class of “mixed effect” kernels. The methodology is illustrated through a simulated recommendation system, as well as an experiment involving pharmacological data coming from a multicentric clinical trial.

DOI [BibTex]



Learning Visual Representations for Perception-Action Systems

Piater, J., Jodogne, S., Detry, R., Kraft, D., Krüger, N., Kroemer, O., Peters, J.

International Journal of Robotics Research, 30(3):294-307, February 2011 (article)

Abstract
We discuss vision as a sensory modality for systems that interact flexibly with uncontrolled environments. Instead of trying to build a generic vision system that produces task-independent representations, we argue in favor of task-specific, learnable representations. This concept is illustrated by two examples of our own work. First, our RLVC algorithm performs reinforcement learning directly on the visual input space. To make this very large space manageable, RLVC interleaves the reinforcement learner with a supervised classification algorithm that seeks to split perceptual states so as to reduce perceptual aliasing. This results in an adaptive discretization of the perceptual space based on the presence or absence of visual features. Its extension, RLJC, additionally handles continuous action spaces. In contrast to the minimalistic visual representations produced by RLVC and RLJC, our second method learns structural object models for robust object detection and pose estimation by probabilistic inference. To these models, the method associates grasp experiences autonomously learned by trial and error. These experiences form a non-parametric representation of grasp success likelihoods over gripper poses, which we call a grasp density. Thus, object detection in a novel scene simultaneously produces suitable grasping options.

PDF Web DOI [BibTex]



Multi-way set enumeration in weight tensors

Georgii, E., Tsuda, K., Schölkopf, B.

Machine Learning, 82(2):123-155, February 2011 (article)

Abstract
The analysis of n-ary relations receives attention in many different fields, for instance biology, web mining, and social studies. In the basic setting, there are n sets of instances, and each observation associates n instances, one from each set. A common approach to explore these n-way data is the search for n-set patterns, the n-way equivalent of itemsets. More precisely, an n-set pattern consists of specific subsets of the n instance sets such that all possible associations between the corresponding instances are observed in the data. In contrast, traditional itemset mining approaches consider only two-way data, namely items versus transactions. The n-set patterns provide a higher-level view of the data, revealing associative relationships between groups of instances. Here, we generalize this approach in two respects. First, we tolerate missing observations to a certain degree, that means we are also interested in n-sets where most (although not all) of the possible associations have been recorded in the data. Second, we take association weights into account. In fact, we propose a method to enumerate all n-sets that satisfy a minimum threshold with respect to the average association weight. Technically, we solve the enumeration task using a reverse search strategy, which allows for effective pruning of the search space. In addition, our algorithm provides a ranking of the solutions and can consider further constraints. We show experimental results on artificial and real-world datasets from different domains.

PDF DOI [BibTex]



Extraction of functional information from ongoing brain electrical activity: Extraction en temps-réel d’informations fonctionnelles à partir de l’activité électrique cérébrale

Besserve, M., Martinerie, J.

IRBM, 32(1):27-34, February 2011 (article)

Abstract
The modern analysis of multivariate electrical brain signals requires advanced statistical tools to automatically extract and quantify their information content. These tools include machine learning techniques and information theory. They are currently used both in basic neuroscience and challenging applications such as brain computer interfaces. We review here how these methods have been used at the Laboratoire d’Électroencéphalographie et de Neurophysiologie Appliquée (LENA) to develop a general tool for the real time analysis of functional brain signals. We then give some perspectives on how these tools can help understanding the biological mechanisms of information processing.

PDF DOI [BibTex]


A graphical model framework for decoding in the visual ERP-based BCI speller

Martens, S., Mooij, J., Hill, N., Farquhar, J., Schölkopf, B.

Neural Computation, 23(1):160-182, January 2011 (article)

Abstract
We present a graphical model framework for decoding in the visual ERP-based speller system. The proposed framework allows researchers to build generative models from which the decoding rules are obtained in a straightforward manner. We suggest two models for generating brain signals conditioned on the stimulus events. Both models incorporate letter frequency information but assume different dependencies between brain signals and stimulus events. For both models, we derive decoding rules and perform a discriminative training. We show on real visual speller data how decoding performance improves by incorporating letter frequency information and using a more realistic graphical model for the dependencies between the brain signals and the stimulus events. Furthermore, we discuss how the standard approach to decoding can be seen as a special case of the graphical model framework. The letter also gives more insight into the discriminative approach for decoding in the visual speller system.

Web DOI [BibTex]



Robust Control of Teleoperation Systems Interacting with Viscoelastic Soft Tissues

Cho, JH., Son, HI., Bhattacharjee, T., Lee, DG., Lee, DY.

IEEE Transactions on Control Systems Technology, January 2011 (article) In revision

[BibTex]



Effect of Control Parameters and Haptic Cues on Human Perception for Remote Operations

Son, HI., Bhattacharjee, T., Jung, H., Lee, DY.

Experimental Brain Research, January 2011 (article) Submitted

[BibTex]



Joint Genetic Analysis of Gene Expression Data with Inferred Cellular Phenotypes

Parts, L., Stegle, O., Winn, J., Durbin, R.

PLoS Genetics, 7(1):1-10, January 2011 (article)

Abstract
Even within a defined cell type, the expression level of a gene differs in individual samples. The effects of genotype, measured factors such as environmental conditions, and their interactions have been explored in recent studies. Methods have also been developed to identify unmeasured intermediate factors that coherently influence transcript levels of multiple genes. Here, we show how to bring these two approaches together and analyse genetic effects in the context of inferred determinants of gene expression. We use a sparse factor analysis model to infer hidden factors, which we treat as intermediate cellular phenotypes that in turn affect gene expression in a yeast dataset. We find that the inferred phenotypes are associated with locus genotypes and environmental conditions and can explain genetic associations to genes in trans. For the first time, we consider and find interactions between genotype and intermediate phenotypes inferred from gene expression levels, complementing and extending established results.

Web DOI [BibTex]



Reinforcement Learning with Bounded Information Loss

Peters, J., Mülling, K., Altun, Y.

AIP Conference Proceedings, 1305(1):365-372, 2011 (article)

Abstract
Policy search is a successful approach to reinforcement learning. However, policy improvements often result in the loss of information. Hence, it has been marred by premature convergence and implausible solutions. As first suggested in the context of covariant or natural policy gradients, many of these problems may be addressed by constraining the information loss. In this paper, we continue this line of reasoning and suggest two reinforcement learning methods, i.e., a model-based and a model-free algorithm that bound the loss in relative entropy while maximizing their return. The resulting methods differ significantly from previous policy gradient approaches and yield an exact update step. They work well on typical reinforcement learning benchmark problems as well as in novel evaluations in robotics. We also show a Bayesian bound motivation of this new approach [8].
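
The constrained update at the heart of this approach can be sketched as follows (notation mine, not copied from the paper): maximize expected return subject to a bound on the relative entropy between successive state-action distributions,

```latex
\max_{\pi}\; \mathbb{E}_{\mu^{\pi},\,\pi}\!\left[ R(s,a) \right]
\quad \text{s.t.} \quad
D_{\mathrm{KL}}\!\left( \mu^{\pi}\pi \,\middle\|\, \mu^{\pi_0}\pi_0 \right) \le \epsilon,
\qquad\Rightarrow\qquad
\pi(a \mid s) \;\propto\; \pi_0(a \mid s)\, \exp\!\left( \delta(s,a)/\eta \right),
```

where π₀ is the previous policy, δ an advantage-like (Bellman-error) term, and η the Lagrange multiplier enforcing the KL bound; the exponential reweighting is what yields the exact update step mentioned in the abstract.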

Web DOI [BibTex]

2007


A Tutorial on Spectral Clustering

von Luxburg, U.

Statistics and Computing, 17(4):395-416, December 2007 (article)

Abstract
In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm. At first glance, spectral clustering appears slightly mysterious, and it is not obvious why it works at all or what it really does. The goal of this tutorial is to give some intuition on those questions. We describe different graph Laplacians and their basic properties, present the most common spectral clustering algorithms, and derive those algorithms from scratch by several different approaches. Advantages and disadvantages of the different spectral clustering algorithms are discussed.
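
As a concrete companion to the tutorial, here is a minimal sketch of unnormalized spectral clustering (one of the variants the tutorial derives): build a similarity graph, form the graph Laplacian, embed the points using the eigenvectors of its smallest eigenvalues, and run k-means on the embedding:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def spectral_clustering(X, k=2, gamma=10.0):
    # fully connected similarity graph with Gaussian weights
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-gamma * d2)
    D = np.diag(W.sum(1))
    L = D - W                            # unnormalized graph Laplacian
    # eigenvectors of the k smallest eigenvalues span the cluster structure
    _, vecs = np.linalg.eigh(L)
    U = vecs[:, :k]                      # n x k spectral embedding
    _, labels = kmeans2(U, k, minit="++", seed=0)
    return labels

# two concentric rings: easy for spectral clustering, hard for plain k-means
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 200)
r = np.repeat([1.0, 3.0], 100)
X = np.c_[r * np.cos(t), r * np.sin(t)] + 0.1 * rng.normal(size=(200, 2))
print(np.bincount(spectral_clustering(X, k=2)))   # expect two groups of 100
```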

PDF PDF DOI [BibTex]



A Tutorial on Kernel Methods for Categorization

Jäkel, F., Schölkopf, B., Wichmann, F.

Journal of Mathematical Psychology, 51(6):343-358, December 2007 (article)

Abstract
The abilities to learn and to categorize are fundamental for cognitive systems, be it animals or machines, and therefore have attracted attention from engineers and psychologists alike. Modern machine learning methods and psychological models of categorization are remarkably similar, partly because these two fields share a common history in artificial neural networks and reinforcement learning. However, machine learning is now an independent and mature field that has moved beyond psychologically or neurally inspired algorithms towards providing foundations for a theory of learning that is rooted in statistics and functional analysis. Much of this research is potentially interesting for psychological theories of learning and categorization but also hardly accessible for psychologists. Here, we provide a tutorial introduction to a popular class of machine learning tools, called kernel methods. These methods are closely related to perceptrons, radial-basis-function neural networks and exemplar theories of categorization. Recent theoretical advances in machine learning are closely tied to the idea that the similarity of patterns can be encapsulated in a positive definite kernel. Such a positive definite kernel can define a reproducing kernel Hilbert space which allows one to use powerful tools from functional analysis for the analysis of learning algorithms. We give basic explanations of some key concepts—the so-called kernel trick, the representer theorem and regularization—which may open up the possibility that insights from machine learning can feed back into psychology.
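
Two standard facts anchor the tutorial's key concepts (stated here for reference in common notation, not quoted from the paper): a positive definite kernel acts as an inner product of feature maps in a reproducing kernel Hilbert space, and by the representer theorem regularized solutions lie in the span of the training data,

```latex
k(x, x') = \langle \varphi(x), \varphi(x') \rangle_{\mathcal{H}},
\qquad
f^{*} = \operatorname*{arg\,min}_{f \in \mathcal{H}}
\frac{1}{n} \sum_{i=1}^{n} \ell\!\left( y_i, f(x_i) \right)
+ \lambda \lVert f \rVert_{\mathcal{H}}^{2}
\;=\; \sum_{i=1}^{n} \alpha_i\, k(x_i, \cdot).
```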

PDF Web DOI [BibTex]



Accurate Splice site Prediction Using Support Vector Machines

Sonnenburg, S., Schweikert, G., Philips, P., Behr, J., Rätsch, G.

BMC Bioinformatics, 8(Supplement 10):1-16, December 2007 (article)

Abstract
Background: For splice site recognition, one has to solve two classification problems: discriminating true from decoy splice sites for both acceptor and donor sites. Gene finding systems typically rely on Markov Chains to solve these tasks. Results: In this work we consider Support Vector Machines for splice site recognition. We employ the so-called weighted degree kernel which turns out well suited for this task, as we will illustrate in several experiments where we compare its prediction accuracy with that of recently proposed systems. We apply our method to the genome-wide recognition of splice sites in Caenorhabditis elegans, Drosophila melanogaster, Arabidopsis thaliana, Danio rerio, and Homo sapiens. Our performance estimates indicate that splice sites can be recognized very accurately in these genomes and that our method outperforms many other methods including Markov Chains, GeneSplicer and SpliceMachine. We provide genome-wide predictions of splice sites and a stand-alone prediction tool ready to be used for incorporation in a gene finder. Availability: Data, splits, additional information on the model selection, the whole genome predictions, as well as the stand-alone prediction tool are available for download at http://www.fml.mpg.de/raetsch/projects/splice.
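
For reference, the weighted degree kernel compares two sequences x and x' of length L by counting matching substrings up to order d_max; a common form from the literature (the paper's exact weighting may differ) is

```latex
k(x, x') = \sum_{d=1}^{d_{\max}} \beta_d \sum_{l=1}^{L-d+1}
\mathbb{I}\!\left( u_{d,l}(x) = u_{d,l}(x') \right),
\qquad
\beta_d = 2\, \frac{d_{\max} - d + 1}{d_{\max}\,(d_{\max} + 1)},
```

where u_{d,l}(x) is the length-d substring of x starting at position l and 𝕀 is the indicator function, so that position-aligned long matches contribute most.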

PDF DOI [BibTex]



A unifying framework for robot control with redundant DOFs

Peters, J., Mistry, M., Udwadia, F., Nakanishi, J., Schaal, S.

Autonomous Robots, 24(1):1-12, October 2007 (article)

Abstract
Recently, Udwadia (Proc. R. Soc. Lond. A 2003:1783–1800, 2003) suggested to derive tracking controllers for mechanical systems with redundant degrees-of-freedom (DOFs) using a generalization of Gauss’ principle of least constraint. This method allows reformulating control problems as a special class of optimal controllers. In this paper, we take this line of reasoning one step further and demonstrate that several well-known and also novel nonlinear robot control laws can be derived from this generic methodology. We show experimental verifications on a Sarcos Master Arm robot for some of the derived controllers. The suggested approach offers a promising unification and simplification of nonlinear control law design for robots obeying rigid body dynamics equations, with or without external constraints, with over-actuation or under-actuation, and with open-chain or closed-chain kinematics.
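
The point of departure can be stated compactly in the standard Udwadia–Kabala form (notation mine): for rigid-body dynamics M(q)q̈ = F with unconstrained acceleration a = M⁻¹F and a task or constraint A(q, q̇) q̈ = b, Gauss's principle of least constraint yields the generalized constraint force

```latex
Q^{c} \;=\; M^{1/2} \left( A\, M^{-1/2} \right)^{+} \left( b - A\, a \right),
```

where (·)⁺ denotes the Moore–Penrose pseudo-inverse; different choices of A and b then recover different tracking controllers.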

PDF PDF DOI [BibTex]



The Need for Open Source Software in Machine Learning

Sonnenburg, S., Braun, M., Ong, C., Bengio, S., Bottou, L., Holmes, G., LeCun, Y., Müller, K., Pereira, F., Rasmussen, C., Rätsch, G., Schölkopf, B., Smola, A., Vincent, P., Weston, J., Williamson, R.

Journal of Machine Learning Research, 8, pages: 2443-2466, October 2007 (article)

Abstract
Open source tools have recently reached a level of maturity which makes them suitable for building large-scale real-world systems. At the same time, the field of machine learning has developed a large body of powerful learning algorithms for diverse applications. However, the true potential of these methods is not realized, since existing implementations are not openly shared, resulting in software with low usability, and weak interoperability. We argue that this situation can be significantly improved by increasing incentives for researchers to publish their software under an open source model. Additionally, we outline the problems authors are faced with when trying to publish algorithmic implementations of machine learning methods. We believe that a resource of peer reviewed software accompanied by short articles would be highly valuable to both the machine learning and the general scientific community.

PDF Web [BibTex]



Some observations on the masking effects of Mach bands

Curnow, T., Cowie, DA., Henning, GB., Hill, NJ.

Journal of the Optical Society of America A, 24(10):3233-3241, October 2007 (article)

Abstract
In experiments measuring Mach bands' masking effects on random-polarity signal bars, there are 8 cycle/deg ripples or oscillations in performance as a function of location near the Mach bands. The oscillations with increments are 180 degrees out of phase with those for decrements. The oscillations, much larger than the measurement error, appear to relate to the weighting function of the spatial-frequency-tuned channel detecting the broadband signals. The ripples disappear with step maskers and become much smaller at durations below 25 ms, implying either that the site of masking has changed or that the weighting function, and hence spatial-frequency tuning, is slow to develop.

PDF Web DOI [BibTex]



Mining complex genotypic features for predicting HIV-1 drug resistance

Saigo, H., Uno, T., Tsuda, K.

Bioinformatics, 23(18):2455-2462, September 2007 (article)

Abstract
Human immunodeficiency virus type 1 (HIV-1) evolves in the human body, and its exposure to a drug often causes mutations that enhance its resistance to the drug. To design an effective pharmacotherapy for an individual patient, it is important to accurately predict the drug resistance based on genotype data. Notably, the resistance is not just the simple sum of the effects of all mutations. Structural biological studies suggest that the association of mutations is crucial: even if mutations A or B alone do not affect the resistance, a significant change might happen when the two mutations occur together. Linear regression methods cannot take the associations into account, while decision tree methods can reveal only limited associations. Kernel methods and neural networks implicitly use all possible associations for prediction, but cannot select salient associations explicitly. Our method, itemset boosting, performs linear regression in the complete space of power sets of mutations. It implements a forward feature selection procedure where, in each iteration, one mutation combination is found by an efficient branch-and-bound search. This method uses all possible combinations, and salient associations are explicitly shown. In experiments, our method worked particularly well for predicting the resistance of nucleotide reverse transcriptase inhibitors (NRTIs). Furthermore, it successfully recovered many mutation associations known in the biological literature.

Web DOI [BibTex]



Real-Time Fetal Heart Monitoring in Biomagnetic Measurements Using Adaptive Real-Time ICA

Waldert, S., Bensch, M., Bogdan, M., Rosenstiel, W., Schölkopf, B., Lowery, C., Eswaran, H., Preissl, H.

IEEE Transactions on Biomedical Engineering, 54(10):1867-1874, September 2007 (article)

Abstract
Electrophysiological signals of the developing fetal brain and heart can be investigated by fetal magnetoencephalography (fMEG). During such investigations, the fetal heart activity and that of the mother should be monitored continuously to provide an important indication of current well-being. Due to physical constraints of an fMEG system, it is not possible to use clinically established heart monitors for this purpose. Considering this constraint, we developed a real-time heart monitoring system for biomagnetic measurements and showed its reliability and applicability in research and for clinical examinations. The developed system consists of real-time access to fMEG data, an algorithm based on Independent Component Analysis (ICA), and a graphical user interface (GUI). The algorithm extracts the current fetal and maternal heart signal from a noisy and artifact-contaminated data stream in real-time and is able to adapt automatically to continuously varying environmental parameters. This algorithm has been named Adaptive Real-time ICA (ARICA) and is applicable to real-time artifact removal as well as to related blind signal separation problems.

PDF Web DOI [BibTex]



Feature Selection for Trouble Shooting in Complex Assembly Lines

Pfingsten, T., Herrmann, D., Schnitzler, T., Feustel, A., Schölkopf, B.

IEEE Transactions on Automation Science and Engineering, 4(3):465-469, July 2007 (article)

Abstract
The final properties of sophisticated products can be affected by many unapparent dependencies within the manufacturing process, and the products’ integrity can often only be checked in a final measurement. Troubleshooting can therefore be very tedious if not impossible in large assembly lines. In this paper we show that Feature Selection is an efficient tool for serial-grouped lines to reveal causes for irregularities in product attributes. We compare the performance of several methods for Feature Selection on real-world problems in mass-production of semiconductor devices. Note to Practitioners: We present a data-based procedure to localize flaws in large production lines: using the results of final quality inspections and information about which machines processed which batches, we are able to identify machines which cause low yield.

PDF Web DOI [BibTex]



Gene selection via the BAHSIC family of algorithms

Song, L., Bedo, J., Borgwardt, K., Gretton, A., Smola, A.

Bioinformatics, 23(13: ISMB/ECCB 2007 Conference Proceedings):i490-i498, July 2007 (article)

Abstract
Motivation: Identifying significant genes among thousands of sequences on a microarray is a central challenge for cancer research in bioinformatics. The ultimate goal is to detect the genes that are involved in disease outbreak and progression. A multitude of methods have been proposed for this task of feature selection, yet the selected gene lists differ greatly between different methods. To accomplish biologically meaningful gene selection from microarray data, we have to understand the theoretical connections and the differences between these methods. In this article, we define a kernel-based framework for feature selection based on the Hilbert–Schmidt independence criterion and backward elimination, called BAHSIC. We show that several well-known feature selectors are instances of BAHSIC, thereby clarifying their relationship. Furthermore, by choosing a different kernel, BAHSIC allows us to easily define novel feature selection algorithms. As a further advantage, feature selection via BAHSIC works directly on multiclass problems. Results: In a broad experimental evaluation, the members of the BAHSIC family reach high levels of accuracy and robustness when compared to other feature selection techniques. Experiments show that features selected with a linear kernel provide the best classification performance in general, but if strong non-linearities are present in the data then non-linear kernels can be more suitable.
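
The backward-elimination loop around HSIC is short enough to sketch (a toy version with a biased HSIC estimator and a Gaussian kernel; the paper covers unbiased estimators, other kernels, and multiclass labels):

```python
import numpy as np

def hsic(X, y, gamma=0.5):
    """Biased empirical HSIC: Gaussian kernel on X, linear kernel on labels."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    L = np.outer(y, y).astype(float)
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def bahsic(X, y, n_keep=2):
    feats = list(range(X.shape[1]))
    while len(feats) > n_keep:
        # drop the feature whose removal hurts the dependence least
        scores = [hsic(X[:, [f for f in feats if f != j]], y) for j in feats]
        feats.pop(int(np.argmax(scores)))
    return feats

rng = np.random.default_rng(0)
y = np.repeat([-1, 1], 50)
X = rng.normal(size=(100, 5))
X[:, 2] += y                                   # strongly informative feature
X[:, 4] += 0.5 * y                             # weakly informative feature
print("selected features:", bahsic(X, y))      # expect [2, 4]
```

Swapping the kernel on X (or on y) changes the notion of dependence being maximized, which is exactly how the framework recovers different classical feature selectors as special cases.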

Web DOI [BibTex]



Phenotyping of Chondrocytes In Vivo and In Vitro Using cDNA Array Technology

Zien, A., Gebhard, P., Fundel, K., Aigner, T.

Clinical Orthopaedics and Related Research, 460, pages: 226-233, July 2007 (article)

Abstract
The cDNA array technology is a powerful tool to analyze a high number of genes in parallel. We investigated whether large-scale gene expression analysis allows clustering and identification of cellular phenotypes of chondrocytes in different in vivo and in vitro conditions. In 100% of cases, clustering analysis distinguished between in vivo and in vitro samples, suggesting fundamental differences in chondrocytes in situ and in vitro regardless of the culture conditions or disease status. It also allowed us to differentiate between healthy and osteoarthritic cartilage. The clustering also revealed the relative importance of the investigated culturing conditions (stimulation agent, stimulation time, bead/monolayer). We augmented the cluster analysis with a statistical search for genes showing differential expression. The identified genes provided hints to the molecular basis of the differences between the sample classes. Our approach shows the power of modern bioinformatic algorithms for understanding and classifying chondrocytic phenotypes in vivo and in vitro. Although it does not generate new experimental data per se, it provides valuable information regarding the biology of chondrocytes and may provide tools for diagnosing and staging the osteoarthritic disease process.

DOI [BibTex]



Common Sequence Polymorphisms Shaping Genetic Diversity in Arabidopsis thaliana

Clark, R., Schweikert, G., Toomajian, C., Ossowski, S., Zeller, G., Shinn, P., Warthmann, N., Hu, T., Fu, G., Hinds, D., Chen, H., Frazer, K., Huson, D., Schölkopf, B., Nordborg, M., Rätsch, G., Ecker, J., Weigel, D.

Science, 317(5836):338-342, July 2007 (article)

Abstract
The genomes of individuals from the same species vary in sequence as a result of different evolutionary processes. To examine the patterns of, and the forces shaping, sequence variation in Arabidopsis thaliana, we performed high-density array resequencing of 20 diverse strains (accessions). More than 1 million nonredundant single-nucleotide polymorphisms (SNPs) were identified at moderate false discovery rates (FDRs), and ~4% of the genome was identified as being highly dissimilar or deleted relative to the reference genome sequence. Patterns of polymorphism are highly nonrandom among gene families, with genes mediating interaction with the biotic environment having exceptional polymorphism levels. At the chromosomal scale, regional variation in polymorphism was readily apparent. A scan for recent selective sweeps revealed several candidate regions, including a notable example in which almost all variation was removed in a 500-kilobase window. Analyzing the polymorphisms we describe in larger sets of accessions will enable a detailed understanding of forces shaping population-wide sequence variation in A. thaliana.

PDF DOI [BibTex]



Graph Laplacians and their Convergence on Random Neighborhood Graphs

Hein, M., Audibert, J., von Luxburg, U.

Journal of Machine Learning Research, 8, pages: 1325-1370, June 2007 (article)

Abstract
Given a sample from a probability measure with support on a submanifold in Euclidean space, one can construct a neighborhood graph which can be seen as an approximation of the submanifold. The graph Laplacian of such a graph is used in several machine learning methods such as semi-supervised learning, dimensionality reduction, and clustering. In this paper we determine the pointwise limit of three different graph Laplacians used in the literature as the sample size increases and the neighborhood size approaches zero. We show that for a uniform measure on the submanifold all graph Laplacians have the same limit up to constants. However, in the case of a non-uniform measure on the submanifold, only the so-called random walk graph Laplacian converges to the weighted Laplace-Beltrami operator.
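For orientation, the three graph Laplacians in question can be written in standard form (our notation, with weight matrix W, diagonal degree matrix D, and identity I; the paper's normalizations and scalings may differ):

```latex
% Standard graph Laplacians for a weighted neighborhood graph:
L_{\mathrm{un}}  = D - W, \qquad
L_{\mathrm{sym}} = I - D^{-1/2} W D^{-1/2}, \qquad
L_{\mathrm{rw}}  = I - D^{-1} W .
```

In the non-uniform case it is the random walk Laplacian L_rw whose pointwise limit is a weighted Laplace-Beltrami operator, acquiring a drift term that depends on the sampling density; see the paper for the exact limit operator and the required joint scaling of sample size and neighborhood radius.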

PDF PDF [BibTex]



Bayesian Reconstruction of the Density of States

Habeck, M.

Physical Review Letters, 98(20, 200601):1-4, May 2007 (article)

Abstract
A Bayesian framework is developed to reconstruct the density of states from multiple canonical simulations. The framework encompasses the histogram reweighting method of Ferrenberg and Swendsen. The new approach applies to nonparametric as well as parametric models and does not require simulation data to be discretized. It offers a means to assess the precision of the reconstructed density of states and of derived thermodynamic quantities.
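As a sketch of the setting (our notation, not necessarily the paper's): a canonical simulation at inverse temperature beta_k samples energies from a distribution fully determined by the density of states g(E),

```latex
% Canonical energy distribution at inverse temperature \beta_k,
% written in terms of the density of states g(E):
p_{\beta_k}(E) = \frac{g(E)\, e^{-\beta_k E}}{Z(\beta_k)},
\qquad
Z(\beta_k) = \int g(E)\, e^{-\beta_k E}\, \mathrm{d}E .
```

Given energy samples from several temperatures, the product of these densities yields a likelihood for g, and placing a (parametric or nonparametric) prior on g gives a posterior from which both point estimates and the precision assessments mentioned in the abstract can be derived.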

Web DOI [BibTex]



PALMA: mRNA to Genome Alignments using Large Margin Algorithms

Schulze, U., Hepp, B., Ong, C., Rätsch, G.

Bioinformatics, 23(15):1892-1900, May 2007 (article)

Abstract
Motivation: Despite many years of research on how to properly align sequences in the presence of sequencing errors, alternative splicing and micro-exons, the correct alignment of mRNA sequences to genomic DNA is still a challenging task. Results: We present a novel approach based on large margin learning that combines accurate splice site predictions with common sequence alignment techniques. By solving a convex optimization problem, our algorithm, called PALMA, tunes the parameters of the model such that true alignments score higher than other alignments. We study the accuracy of alignments of mRNAs containing artificially generated micro-exons to genomic DNA. In a carefully designed experiment, we show that our algorithm accurately identifies the intron boundaries as well as the boundaries of the optimal local alignment. It outperforms all other methods: for 5702 artificially shortened EST sequences from C. elegans and human, it correctly identifies the intron boundaries in all except two cases. The best competing method, a recently proposed method called exalin, misaligns 37 of the sequences. Our method also demonstrates robustness to mutations, insertions and deletions, retaining accuracy even at high noise levels. Availability: Datasets for training, evaluation and testing, additional results and a stand-alone alignment tool implemented in C++ and Python are available at http://www.fml.mpg.de/raetsch/projects/palma.
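The large-margin tuning can be sketched as a generic structured-output program (our notation, not PALMA's exact formulation): with s_theta(x, y) the score of alignment y for sequence pair x and Delta a loss between alignments, the parameters solve

```latex
\min_{\theta,\ \xi \ge 0}\ \frac{1}{2}\|\theta\|^2 + C \sum_i \xi_i
\quad \text{s.t.} \quad
s_\theta(x_i, y_i) \;\ge\; s_\theta(x_i, y) + \Delta(y_i, y) - \xi_i
\qquad \forall\, y \ne y_i ,
```

so the true alignment y_i must outscore every alternative by a margin that grows with how different the alternative is. This is what "true alignments score higher than other alignments" amounts to, and the problem is convex in theta.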

Web DOI [BibTex]



Training a Support Vector Machine in the Primal

Chapelle, O.

Neural Computation, 19(5):1155-1178, March 2007 (article)

Abstract
Most literature on Support Vector Machines (SVMs) concentrates on the dual optimization problem. In this paper, we would like to point out that the primal problem can also be solved efficiently, both for linear and non-linear SVMs, and that there is no reason for ignoring this possibility. On the contrary, from the primal point of view, new families of algorithms for large-scale SVM training can be investigated.
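A minimal sketch of the primal view for a linear SVM with the (differentiable) squared hinge loss, trained by plain gradient descent; the paper's point is made with second-order Newton-type optimization and also covers the non-linear case via kernels, so treat this only as the simplest instance of the idea (bias term omitted for brevity):

```python
# Primal training of a linear SVM with squared hinge loss:
#   min_w  lam/2 * ||w||^2 + sum_i max(0, 1 - y_i <w, x_i>)^2
import numpy as np

def train_primal_svm(X, y, lam=1e-2, lr=1e-3, n_iter=500):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        margins = 1.0 - y * (X @ w)
        active = margins > 0                 # points violating the margin
        # Gradient of the squared hinge: -2 * margin_i * y_i * x_i (if active).
        grad = lam * w - 2.0 * (X[active].T @ (y[active] * margins[active]))
        w -= lr * grad
    return w

# Toy usage:
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))
w = train_primal_svm(X, y)
print("training accuracy:", np.mean(np.sign(X @ w) == y))
```

Because the squared hinge is differentiable, the primal objective can be attacked directly with first- or second-order methods instead of going through the dual.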

PDF Web DOI [BibTex]



Improving the Caenorhabditis elegans Genome Annotation Using Machine Learning

Rätsch, G., Sonnenburg, S., Srinivasan, J., Witte, H., Müller, K., Sommer, R., Schölkopf, B.

PLoS Computational Biology, 3(2, e20):0313-0322, February 2007 (article)

PDF DOI [BibTex]



Statistical Consistency of Kernel Canonical Correlation Analysis

Fukumizu, K., Bach, F., Gretton, A.

Journal of Machine Learning Research, 8, pages: 361-383, February 2007 (article)

Abstract
While kernel canonical correlation analysis (CCA) has been applied in many contexts, the convergence of finite sample estimates of the associated functions to their population counterparts has not yet been established. This paper gives a mathematical proof of the statistical convergence of kernel CCA, providing a theoretical justification for the method. The proof uses covariance operators defined on reproducing kernel Hilbert spaces, and analyzes the convergence of their empirical estimates of finite rank to their population counterparts, which can have infinite rank. The result also gives a sufficient condition on the regularization coefficient involved in kernel CCA for convergence: it should decrease as n^{-1/3}, where n is the number of data points.
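In our notation (a sketch of the standard regularized estimator, not the paper's exact statement): given kernels on X and Y with RKHSs H_X and H_Y, the empirical kernel CCA value with regularization epsilon_n is

```latex
\hat\rho_n =
\max_{f \in \mathcal{H}_X,\ g \in \mathcal{H}_Y}
\frac{\widehat{\mathrm{Cov}}\,[f(X), g(Y)]}
     {\big(\widehat{\mathrm{Var}}\,[f(X)] + \varepsilon_n \|f\|_{\mathcal{H}_X}^2\big)^{1/2}
      \big(\widehat{\mathrm{Var}}\,[g(Y)] + \varepsilon_n \|g\|_{\mathcal{H}_Y}^2\big)^{1/2}} ,
```

and the consistency result concerns the behavior of the maximizing functions as n grows, with epsilon_n decaying at the n^{-1/3} rate given in the abstract.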

PDF [BibTex]
