2007


Bayesian Inference for Sparse Generalized Linear Models

Seeger, M., Gerwinn, S., Bethge, M.

In ECML 2007, pages: 298-309, Lecture Notes in Computer Science 4701, (Editors: Kok, J. N., J. Koronacki, R. Lopez de Mantaras, S. Matwin, D. Mladenic, A. Skowron), Springer, Berlin, Germany, 18th European Conference on Machine Learning, September 2007 (inproceedings)

Abstract
We present a framework for efficient, accurate approximate Bayesian inference in generalized linear models (GLMs), based on the expectation propagation (EP) technique. The parameters can be endowed with a factorizing prior distribution, encoding properties such as sparsity or non-negativity. The central role of posterior log-concavity in Bayesian GLMs is emphasized and related to stability issues in EP. In particular, we use our technique to infer the parameters of a point process model for neuronal spiking data from multiple electrodes, demonstrating significantly superior predictive performance when a sparsity assumption is enforced via a Laplace prior distribution.

PDF DOI [BibTex]

Implicit Surfaces with Globally Regularised and Compactly Supported Basis Functions

Walder, C., Schölkopf, B., Chapelle, O.

In Advances in Neural Information Processing Systems 19, pages: 273-280, (Editors: B Schölkopf and J Platt and T Hofmann), MIT Press, Cambridge, MA, USA, 20th Annual Conference on Neural Information Processing Systems (NIPS), September 2007 (inproceedings)

Abstract
We consider the problem of constructing a function whose zero set is to represent a surface, given sample points with surface normal vectors. The contributions include a novel means of regularising multi-scale compactly supported basis functions that leads to the desirable properties previously only associated with fully supported bases, along with a proof of equivalence to a Gaussian process with a modified covariance function. We also provide a regularisation framework for simpler and more direct treatment of surface normals, along with a corresponding generalisation of the representer theorem. We demonstrate the techniques on 3D problems of up to 14 million data points, as well as 4D time series data.

PDF Web [BibTex]

Trading Convexity for Scalability

Collobert, R., Sinz, F., Weston, J., Bottou, L.

In Large Scale Kernel Machines, pages: 275-300, Neural Information Processing, (Editors: Bottou, L. , O. Chapelle, D. DeCoste, J. Weston), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Abstract
Convex learning algorithms, such as Support Vector Machines (SVMs), are often seen as highly desirable because they offer strong practical properties and are amenable to theoretical analysis. However, in this work we show how nonconvexity can provide scalability advantages over convexity. We show how concave-convex programming can be applied to produce (i) faster SVMs where training errors are no longer support vectors, and (ii) much faster Transductive SVMs.
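
The concave-convex procedure (CCCP) at the heart of this chapter is easy to state: split the objective into a convex part plus a concave part, then repeatedly minimize the convex part plus a linearization of the concave part. Below is a minimal sketch; the function names and the toy objective are illustrative, not the chapter's ramp-loss SVM.

```python
# Minimal concave-convex procedure (CCCP) sketch: minimize u(w) + v(w),
# where u is convex and v is concave, by repeatedly linearizing v.
import numpy as np
from scipy.optimize import minimize

def cccp(u, grad_v, w0, n_iter=20):
    """Iterate w_{t+1} = argmin_w u(w) + grad_v(w_t) . w  (v linearized at w_t)."""
    w = np.asarray(w0, dtype=float)
    for _ in range(n_iter):
        g = grad_v(w)  # gradient of the concave part, held fixed this round
        w_new = minimize(lambda x: u(x) + g @ x, w).x  # convex + linear surrogate
        if np.linalg.norm(w_new - w) < 1e-8:
            break
        w = w_new
    return w

# Toy example: u(w) = ||w - 1||^2 (convex), v(w) = -0.5 * ||w||^2 (concave).
w_star = cccp(lambda w: np.sum((w - 1.0) ** 2),
              lambda w: -w,          # gradient of the concave part v
              w0=np.zeros(2))
print(w_star)  # each coordinate converges to 2, solving 2(w - 1) - w = 0
```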

PDF Web [BibTex]

Scalable Semidefinite Programming using Convex Perturbations

Kulis, B., Sra, S., Jegelka, S.

(TR-07-47), University of Texas, Austin, TX, USA, September 2007 (techreport)

Abstract
Several important machine learning problems can be modeled and solved via semidefinite programs. Often, researchers invoke off-the-shelf software for the associated optimization, which can be inappropriate for many applications due to computational and storage requirements. In this paper, we introduce the use of convex perturbations for semidefinite programs (SDPs). Using a particular perturbation function, we arrive at an algorithm for SDPs that has several advantages over existing techniques: a) it is simple, requiring only a few lines of MATLAB, b) it is a first-order method which makes it scalable, c) it can easily exploit the structure of a particular SDP to gain efficiency (e.g., when the constraint matrices are low-rank). We demonstrate on several machine learning applications that the proposed algorithm is effective in finding fast approximations to large-scale SDPs.
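
A hedged sketch of the kind of first-order scheme the abstract describes: a strongly convex perturbation is added to the SDP objective, and projected gradient steps with an eigenvalue-clipping projection onto the PSD cone do the work. The quadratic penalty on the linear constraints and all parameter values below are assumptions for illustration, not the report's exact algorithm.

```python
# Sketch of a first-order method for a perturbed SDP:
#   min_{X >= 0}  <C, X> + (eps/2)||X||_F^2 + (rho/2) sum_i (<A_i, X> - b_i)^2
# The eps-term is the convex perturbation; the penalty term is a stand-in
# for proper constraint handling. Projection clips negative eigenvalues.
import numpy as np

def psd_project(X):
    w, V = np.linalg.eigh((X + X.T) / 2)    # symmetrize, then eigendecompose
    return (V * np.clip(w, 0, None)) @ V.T  # zero out negative eigenvalues

def perturbed_sdp(C, A, b, eps=1e-2, rho=10.0, step=1e-2, iters=2000):
    X = np.eye(C.shape[0])
    for _ in range(iters):
        resid = np.array([np.tensordot(Ai, X) - bi for Ai, bi in zip(A, b)])
        grad = C + eps * X + rho * sum(r * Ai for r, Ai in zip(resid, A))
        X = psd_project(X - step * grad)    # projected gradient step
    return X

# Example: minimize <C, X> over PSD matrices with trace(X) ~ 1.
rng = np.random.default_rng(0)
C = rng.standard_normal((5, 5)); C = (C + C.T) / 2
X = perturbed_sdp(C, A=[np.eye(5)], b=[1.0])
print(np.trace(X))  # approximately 1; exact only as rho grows
```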

PDF [BibTex]

Bayesian methods for NMR structure determination

Habeck, M.

29th Annual Discussion Meeting: Magnetic Resonance in Biophysical Chemistry, September 2007 (talk)

Web [BibTex]

Comparison of Adaptive Spatial Filters with Heuristic and Optimized Region of Interest for EEG Based Brain-Computer-Interfaces

Liefhold, C., Grosse-Wentrup, M., Gramann, K., Buss, M.

In Pattern Recognition, pages: 274-283, (Editors: Hamprecht, F. A., C. Schnörr, B. Jähne), Springer, Berlin, Germany, 29th Annual Symposium of the German Association for Pattern Recognition, September 2007 (inproceedings)

Abstract
Research on EEG based brain-computer-interfaces (BCIs) aims at steering devices by thought. Even for simple applications, BCIs require extremely effective data processing to work properly because of the low signal-to-noise ratio (SNR) of EEG signals. Spatial filtering is one successful preprocessing method, which extracts the EEG components carrying the most relevant information. Unlike spatial filtering with Common Spatial Patterns (CSP), Adaptive Spatial Filtering (ASF) can be adapted to freely selectable regions of interest (ROIs), so that artifacts can be actively suppressed. In this context, we compare the performance of ASF with ROIs selected using anatomical a-priori information against ASF with numerically optimized ROIs. To this end, we introduce a method for data-driven spatial filter adaptation and apply the resulting filters to the classification of EEG data recorded during imagined movements of the left and right hand of four subjects. The results show that for artifact-free datasets, ASFs with numerically optimized ROIs achieve classification rates of up to 97.7% while ASFs with ROIs defined by an anatomical heuristic stay at 93.7% for the same data. For noisy datasets, however, the former break down (66.7%) while the latter reach 95.7%.

PDF Web DOI [BibTex]

Machine Learning and Applications in Biology

Shin, H.

In BioKorea 2007, pages: 337-366, BioKorea, September 2007 (inproceedings)

Abstract
The emergence of the fields of computational biology and bioinformatics has alleviated the burden of solving many biological problems, saving the time and cost required for experiments and also providing predictions that guide new experiments. Within computational biology, machine learning algorithms have played a central role in dealing with the flood of biological data. The goal of this tutorial is to raise awareness and comprehension of machine learning so that biologists can properly match the task at hand to the corresponding analytical approach. We start by categorizing biological problem settings and introduce the general machine learning schemes that fit best to each of these categories. We then explore representative models in further detail, from traditional statistical models to recent kernel models, presenting several up-to-date research projects in bioinformatics to exemplify how biological questions can benefit from a machine learning approach. Finally, we discuss how cooperation between biologists and machine learners might be made smoother.

[BibTex]

Real-Time Fetal Heart Monitoring in Biomagnetic Measurements Using Adaptive Real-Time ICA

Waldert, S., Bensch, M., Bogdan, M., Rosenstiel, W., Schölkopf, B., Lowery, C., Eswaran, H., Preissl, H.

IEEE Transactions on Biomedical Engineering, 54(10):1867-1874, September 2007 (article)

Abstract
Electrophysiological signals of the developing fetal brain and heart can be investigated by fetal magnetoencephalography (fMEG). During such investigations, the fetal heart activity and that of the mother should be monitored continuously to provide an important indication of current well-being. Due to physical constraints of an fMEG system, it is not possible to use clinically established heart monitors for this purpose. Considering this constraint, we developed a real-time heart monitoring system for biomagnetic measurements and showed its reliability and applicability in research and for clinical examinations. The developed system consists of real-time access to fMEG data, an algorithm based on Independent Component Analysis (ICA), and a graphical user interface (GUI). The algorithm extracts the current fetal and maternal heart signal from a noisy and artifact-contaminated data stream in real-time and is able to adapt automatically to continuously varying environmental parameters. This algorithm has been named Adaptive Real-time ICA (ARICA) and is applicable to real-time artifact removal as well as to related blind signal separation problems.

PDF Web DOI [BibTex]

A Nonparametric Approach to Bottom-Up Visual Saliency

Kienzle, W., Wichmann, F., Schölkopf, B., Franz, M.

In Advances in Neural Information Processing Systems 19, pages: 689-696, (Editors: B Schölkopf and J Platt and T Hofmann), MIT Press, Cambridge, MA, USA, 20th Annual Conference on Neural Information Processing Systems (NIPS), September 2007 (inproceedings)

Abstract
This paper addresses the bottom-up influence of local image information on human eye movements. Most existing computational models use a set of biologically plausible linear filters, e.g., Gabor or Difference-of-Gaussians filters as a front-end, the outputs of which are nonlinearly combined into a real number that indicates visual saliency. Unfortunately, this requires many design parameters such as the number, type, and size of the front-end filters, as well as the choice of nonlinearities, weighting and normalization schemes etc., for which biological plausibility cannot always be justified. As a result, these parameters have to be chosen in a more or less ad hoc way. Here, we propose to learn a visual saliency model directly from human eye movement data. The model is rather simplistic and essentially parameter-free, and therefore contrasts recent developments in the field that usually aim at higher prediction rates at the cost of additional parameters and increasing model complexity. Experimental results show that - despite the lack of any biological prior knowledge - our model performs comparably to existing approaches, and in fact learns image features that resemble findings from several previous studies. In particular, its maximally excitatory stimuli have center-surround structure, similar to receptive fields in the early human visual system.

PDF Web [BibTex]

Information Bottleneck for Non Co-Occurrence Data

Seldin, Y., Slonim, N., Tishby, N.

In Advances in Neural Information Processing Systems 19, pages: 1241-1248, (Editors: Schölkopf, B. , J. Platt, T. Hofmann), MIT Press, Cambridge, MA, USA, Twentieth Annual Conference on Neural Information Processing Systems (NIPS), September 2007 (inproceedings)

Abstract
We present a general model-independent approach to the analysis of data in cases when these data do not appear in the form of co-occurrence of two variables X, Y, but rather as a sample of values of an unknown (stochastic) function Z(X,Y). For example, in gene expression data, the expression level Z is a function of gene X and condition Y; or in movie ratings data the rating Z is a function of viewer X and movie Y. The approach represents a consistent extension of the Information Bottleneck method that has previously relied on the availability of co-occurrence statistics. By altering the relevance variable we eliminate the need for a sample of the joint distribution of all input variables. This new formulation also enables simple MDL-like model complexity control and prediction of missing values of Z. The approach is analyzed and shown to be on a par with the best known clustering algorithms for a wide range of domains. For the prediction of missing values (collaborative filtering) it improves on the currently best known results.

PDF Web [BibTex]

Learning with Hypergraphs: Clustering, Classification, and Embedding

Zhou, D., Huang, J., Schölkopf, B.

In Advances in Neural Information Processing Systems 19, pages: 1601-1608, (Editors: B Schölkopf and J Platt and T Hofmann), MIT Press, Cambridge, MA, USA, 20th Annual Conference on Neural Information Processing Systems (NIPS), September 2007 (inproceedings)

Abstract
We usually endow the objects under investigation with pairwise relationships, which can be illustrated as graphs. In many real-world problems, however, relationships among the objects of interest are more complex than pairwise. Naively squeezing the complex relationships into pairwise ones will inevitably lead to loss of information which can be expected to be valuable for our learning tasks. Therefore we consider using hypergraphs instead to completely represent complex relationships among the objects of interest, and thus the problem of learning with hypergraphs arises. Our main contribution in this paper is to generalize the powerful methodology of spectral clustering, which originally operates on undirected graphs, to hypergraphs, and further to develop algorithms for hypergraph embedding and transductive classification on the basis of the spectral hypergraph clustering approach. Our experiments on a number of benchmarks showed the advantages of hypergraphs over usual graphs.

PDF Web [BibTex]

Classifying Event-Related Desynchronization in EEG, ECoG and MEG signals

Hill, N., Lal, T., Tangermann, M., Hinterberger, T., Widman, G., Elger, C., Schölkopf, B., Birbaumer, N.

In Toward Brain-Computer Interfacing, pages: 235-260, Neural Information Processing, (Editors: G Dornhege and J del R Millán and T Hinterberger and DJ McFarland and K-R Müller), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

PDF Web [BibTex]

Fast Computation of Graph Kernels

Vishwanathan, SVN., Borgwardt, KM., Schraudolph, N.

In Advances in Neural Information Processing Systems 19, pages: 1449-1456, (Editors: Schölkopf, B. , J. Platt, T. Hofmann), MIT Press, Cambridge, MA, USA, Twentieth Annual Conference on Neural Information Processing Systems (NIPS), September 2007 (inproceedings)

Abstract
Using extensions of linear algebra concepts to Reproducing Kernel Hilbert Spaces (RKHS), we define a unifying framework for random walk kernels on graphs. Reduction to a Sylvester equation allows us to compute many of these kernels in O(n^3) worst-case time. This includes kernels whose previous worst-case time complexity was O(n^6), such as the geometric kernels of Gärtner et al. [1] and the marginal graph kernels of Kashima et al. [2]. Our algebra in RKHS allows us to exploit sparsity in directed and undirected graphs more effectively than previous methods, yielding sub-cubic computational complexity when combined with conjugate gradient solvers or fixed-point iterations. Experiments on graphs from bioinformatics and other application domains show that our algorithms are often more than 1000 times faster than existing approaches.
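
For intuition, here is a minimal version of the geometric random-walk kernel computed by fixed-point iteration, one of the solver options the abstract mentions. Forming the Kronecker product explicitly, as below, is exactly what the paper avoids for scalability; the uniform start/stop distributions are an assumption.

```python
# Geometric random-walk graph kernel via fixed-point iteration.
# Solves x = p + lam * W x, i.e. x = (I - lam W)^{-1} p, on the product graph.
import numpy as np

def random_walk_kernel(A1, A2, lam=0.1, tol=1e-10, max_iter=1000):
    W = np.kron(A1, A2)                    # adjacency of the product graph
    n = W.shape[0]
    p = np.full(n, 1.0 / n)                # uniform start/stop probabilities
    x = p.copy()
    for _ in range(max_iter):
        x_new = p + lam * W @ x            # fixed-point update
        if np.linalg.norm(x_new - x, 1) < tol:
            break
        x = x_new
    return p @ x                           # q^T (I - lam W)^{-1} p with q = p

A1 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)
A2 = np.array([[0, 1], [1, 0]], float)
print(random_walk_kernel(A1, A2))  # converges since lam * rho(W) < 1 here
```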

PDF Web [BibTex]

Joint Kernel Maps

Weston, J., Bakir, G., Bousquet, O., Mann, T., Noble, W., Schölkopf, B.

In Predicting Structured Data, pages: 67-84, Advances in neural information processing systems, (Editors: GH Bakir and T Hofmann and B Schölkopf and AJ Smola and B Taskar and SVN Vishwanathan), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Web [BibTex]

Correcting Sample Selection Bias by Unlabeled Data

Huang, J., Smola, A., Gretton, A., Borgwardt, K., Schölkopf, B.

In Advances in Neural Information Processing Systems 19, pages: 601-608, (Editors: B Schölkopf and J Platt and T Hofmann), MIT Press, Cambridge, MA, USA, 20th Annual Conference on Neural Information Processing Systems (NIPS), September 2007 (inproceedings)

Abstract
We consider the scenario where training and test data are drawn from different distributions, commonly referred to as sample selection bias. Most algorithms for this setting try to first recover sampling distributions and then make appropriate corrections based on the distribution estimate. We present a nonparametric method which directly produces resampling weights without distribution estimation. Our method works by matching distributions between training and testing sets in feature space. Experimental results demonstrate that our method works well in practice.
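
The matching idea admits a compact sketch: choose training weights beta so that the weighted kernel mean of the training set matches the kernel mean of the test set. The ridge-regularized closed form below replaces the constrained quadratic program of the full method, so treat it as an illustration under that simplification.

```python
# Kernel-mean-matching sketch: reweight training points so their kernel mean
# matches the test set's kernel mean (unconstrained ridge solution).
import numpy as np

def rbf(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kmm_weights(X_train, X_test, gamma=0.5, ridge=1e-3):
    K = rbf(X_train, X_train, gamma)                  # m x m train kernel
    kappa = rbf(X_train, X_test, gamma).mean(axis=1)  # match to test mean
    m = len(X_train)
    beta = np.linalg.solve(K + ridge * np.eye(m), m * kappa)
    return np.clip(beta, 0, None)                     # keep weights >= 0

rng = np.random.default_rng(1)
X_tr = rng.normal(0.0, 1.0, size=(200, 1))   # training distribution
X_te = rng.normal(1.0, 1.0, size=(200, 1))   # shifted test distribution
w = kmm_weights(X_tr, X_te)
# Weights grow toward the test region (larger x), so the correlation is positive.
print(np.corrcoef(X_tr[:, 0], w)[0, 1])
```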

PDF Web [BibTex]

Brain-Computer Interfaces for Communication in Paralysis: A Clinical Experimental Approach

Hinterberger, T., Nijboer, F., Kübler, A., Matuz, T., Furdea, A., Mochty, U., Jordan, M., Lal, T., Hill, J., Mellinger, J., Bensch, M., Tangermann, M., Widman, G., Elger, C., Rosenstiel, W., Schölkopf, B., Birbaumer, N.

In Toward Brain-Computer Interfacing, pages: 43-64, Neural Information Processing, (Editors: G. Dornhege and J del R Millán and T Hinterberger and DJ McFarland and K-R Müller), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

PDF Web [BibTex]

Near-Maximum Entropy Models for Binary Neural Representations of Natural Images

Berens, P., Bethge, M.

Neural Coding, Computation and Dynamics (NCCD 07), 1, pages: 19, September 2007 (poster)

Abstract
Maximum entropy analysis of binary variables provides an elegant way for studying the role of pairwise correlations in neural populations. Unfortunately, these approaches suffer from their poor scalability to high dimensions. In sensory coding, however, high-dimensional data is ubiquitous. Here, we introduce a new approach using a near-maximum entropy model that makes this type of analysis feasible for very high-dimensional data---the model parameters can be derived in closed form and sampling is easy. We demonstrate its usefulness by studying a simple neural representation model of natural images. For the first time, we are able to directly compare predictions from a pairwise maximum entropy model not only in small groups of neurons, but also in larger populations of more than a thousand units. Our results indicate that in such larger networks interactions exist that are not predicted by pairwise correlations, despite the fact that pairwise correlations explain the lower-dimensional marginal statistics extremely well up to the limit of dimensionality where estimation of the full joint distribution is feasible.

PDF [BibTex]

Sparse Multiscale Gaussian Process Regression

Walder, C., Kim, K., Schölkopf, B.

(162), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, August 2007 (techreport)

Abstract
Most existing sparse Gaussian process (g.p.) models seek computational advantages by basing their computations on a set of m basis functions that are the covariance function of the g.p. with one of its two inputs fixed. We generalise this for the case of the Gaussian covariance function, by basing our computations on m Gaussian basis functions with arbitrary diagonal covariance matrices (or length scales). For a fixed number of basis functions and any given criteria, this additional flexibility permits approximations no worse and typically better than was previously possible. Although we focus on g.p. regression, the central idea is applicable to all kernel based algorithms, such as the support vector machine. We perform gradient based optimisation of the marginal likelihood, which costs O(m^2 n) time where n is the number of data points, and compare the method to various other sparse g.p. methods. Our approach outperforms the other methods, particularly for the case of very few basis functions, i.e. a very high sparsity ratio.

PDF [BibTex]

Overcomplete Independent Component Analysis via Linearly Constrained Minimum Variance Spatial Filtering

Grosse-Wentrup, M., Buss, M.

Journal of VLSI Signal Processing, 48(1-2):161-171, August 2007 (article)

Abstract
Independent Component Analysis (ICA) designed for complete bases is used in a variety of applications with great success, despite the often questionable assumption of having N sensors and M sources with N ≥ M. In this article, we assume a source model with more sources than sensors (M>N), only L<N of which are assumed to have a non-Gaussian distribution. We argue that this is a realistic source model for a variety of applications, and prove that for ICA algorithms designed for complete bases (i.e., algorithms assuming N=M) based on mutual information the mixture coefficients of the L non-Gaussian sources can be reconstructed in spite of the overcomplete mixture model. Further, it is shown that the reconstructed temporal activity of non-Gaussian sources is arbitrarily mixed with Gaussian sources. To obtain estimates of the temporal activity of the non-Gaussian sources, we use the correctly reconstructed mixture coefficients in conjunction with linearly constrained minimum variance spatial filtering. This results in estimates of the non-Gaussian sources minimizing the variance of the interference of other sources. The approach is applied to the denoising of Event Related Fields recorded by MEG, and it is shown that it performs superiorly to ordinary ICA.

PDF PDF DOI [BibTex]


Efficient Subwindow Search for Object Localization

Blaschko, M., Hofmann, T., Lampert, C.

(164), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, August 2007 (techreport)

Abstract
Recent years have seen huge advances in object recognition from images. Recognition rates beyond 95% are the rule rather than the exception on many datasets. However, most state-of-the-art methods can only decide if an object is present or not. They are not able to provide information on the object location or extent within the image. We report on a simple yet powerful scheme that extends many existing recognition methods to also perform localization of object bounding boxes. This is achieved by maximizing the classification score over all possible subrectangles in the image. Despite the impression that this would be computationally intractable, we show that in many situations efficient algorithms exist which solve a generalized maximum subrectangle problem. We show how our method is applicable to a variety of object detection frameworks and demonstrate its performance by applying it to the popular bag of visual words model, achieving competitive results on the PASCAL VOC 2006 dataset.

PDF [BibTex]

Collaborative Filtering via Ensembles of Matrix Factorizations

Wu, M.

In KDD Cup and Workshop 2007, pages: 43-47, KDD Cup and Workshop, August 2007 (inproceedings)

Abstract
We present a Matrix Factorization (MF) based approach for the Netflix Prize competition. Currently, MF based algorithms are popular and have proved successful for collaborative filtering tasks. For the Netflix Prize competition, we adopt three different types of MF algorithms: regularized MF, maximum margin MF and non-negative MF. Furthermore, for each MF algorithm, instead of selecting the optimal parameters, we combine the results obtained with several parameters. With this method, we achieve a performance that is more than 6% better than Netflix's own system.
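
A toy version of the ensemble idea: fit a regularized matrix factorization by SGD for several regularization strengths and average the predictions instead of selecting one setting. All hyperparameters below are placeholders, not the competition settings.

```python
# Ensemble of regularized matrix factorizations: average predictions across
# several regularization strengths instead of picking one "optimal" value.
import numpy as np

def train_mf(ratings, n_users, n_items, k=8, lam=0.05, lr=0.01, epochs=30):
    rng = np.random.default_rng(0)
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:                 # SGD over observed entries
            err = r - U[u] @ V[i]
            Uu = U[u].copy()                    # use old value for both updates
            U[u] += lr * (err * V[i] - lam * U[u])
            V[i] += lr * (err * Uu - lam * V[i])
    return U, V

def ensemble_predict(ratings, n_users, n_items, u, i, lams=(0.02, 0.05, 0.1)):
    preds = []
    for lam in lams:
        U, V = train_mf(ratings, n_users, n_items, lam=lam)
        preds.append(U[u] @ V[i])
    return float(np.mean(preds))                # average across the ensemble

ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0), (2, 1, 4.0)]
print(ensemble_predict(ratings, n_users=3, n_items=3, u=2, i=0))
```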

PDF Web [BibTex]

Thinking Out Loud: Research and Development of Brain Computer Interfaces

Hill, NJ.

Invited keynote talk at the Max Planck Society‘s PhDNet Workshop., July 2007 (talk)

Abstract
My principal interest is in applying machine-learning methods to the development of Brain-Computer Interfaces (BCI). This involves the classification of a user‘s intentions or mental states, or regression against some continuous intentional control signal, using brain signals obtained for example by EEG, ECoG or MEG. The long-term aim is to develop systems that a completely paralysed person (such as someone suffering from advanced Amyotrophic Lateral Sclerosis) could use to communicate. Such systems have the potential to improve the lives of many people who would be otherwise completely unable to communicate, but they are still very much in the research and development stages.

PDF [BibTex]

Fusion of spectral and spatial information by a novel SVM classification technique

Bruzzone, L., Marconcini, M., Persello, C.

In pages: 4838-4841, IEEE, Piscataway, NJ, USA, IEEE International Geoscience and Remote Sensing Symposium (IGARSS), July 2007 (inproceedings)

Abstract
A novel context-sensitive semisupervised classification technique based on support vector machines is proposed. This technique aims at exploiting the SVM method for image classification by properly fusing spectral information with spatial-context information. This results in: i) an increased robustness to noisy training sets in the learning phase of the classifier; ii) a higher and more stable classification accuracy with respect to the specific patterns included in the training set; and iii) a regularized classification map. The main property of the proposed context-sensitive semisupervised SVM (CS4VM) is to adaptively exploit the contextual information in the training phase of the classifier, without any critical assumption on the expected labels of the pixels included in the same neighborhood system. This is done by defining a novel context-sensitive term in the objective function used in the learning of the classifier. In addition, the proposed CS4VM can be integrated with a Markov random field (MRF) approach for exploiting the contextual information also to regularize the classification map. Experiments carried out on very high geometrical resolution images confirmed the effectiveness of the proposed technique.

Web DOI [BibTex]

Online-Computation Approach to Optimal Control of Noise-Affected Nonlinear Systems with Continuous State and Control Spaces

Deisenroth, MP., Weissel, F., Ohtsuka, T., Hanebeck, UD.

In ECC‘07, pages: 3664-3671, 9th European Control Conference, July 2007 (inproceedings)

Abstract
A novel online-computation approach to optimal control of nonlinear, noise-affected systems with continuous state and control spaces is presented. In the proposed algorithm, system noise is explicitly incorporated into the control decision. This leads to superior results compared to state-of-the-art nonlinear controllers that neglect this influence. The solution of an optimal nonlinear controller for a corresponding deterministic system is employed to find a meaningful state space restriction. This restriction is obtained by means of approximate state prediction using the noisy system equation. Within this constrained state space, an optimal closed-loop solution for a finite decision-making horizon (prediction horizon) is determined within an adaptively restricted optimization space. Interleaving stochastic dynamic programming and value function approximation yields a solution to the considered optimal control problem. The enhanced performance of the proposed discrete-time controller is illustrated by means of a scalar example system. Nonlinear model predictive control is applied to address approximate treatment of infinite-horizon problems by the finite-horizon controller.

PDF Web [BibTex]

Error Correcting Codes for the P300 Visual Speller

Biessmann, F.

Biologische Kybernetik, Eberhard-Karls-Universität Tübingen, Tübingen, Germany, July 2007 (diplomathesis)

Abstract
The aim of brain-computer interface (BCI) research is to establish a communication system based on intentional modulation of brain activity. This is accomplished by classifying patterns of brain activity, volitionally induced by the user. The BCI presented in this study is based on a classical paradigm as proposed by (Farwell and Donchin, 1988), the P300 visual speller. Recording electroencephalograms (EEG) from the scalp while presenting letters successively to the user, the speller can infer from the brain signal which letter the user was focussing on. Since EEG recordings are noisy, usually many repetitions are needed to detect the correct letter. The focus of this study was to improve the accuracy of the visual speller by applying some basic principles from information theory: stimulus sequences of the speller have been modified into error-correcting codes. Additionally, a language model was incorporated into the probabilistic letter decoder. Classification of single EEG epochs was less accurate using error-correcting codes. However, the novel code could compensate for this, such that overall letter accuracies were as high as or even higher than for classical stimulus codes. In particular at high noise levels, error-correcting decoding achieved higher letter accuracies.

PDF [BibTex]

Feature Selection for Trouble Shooting in Complex Assembly Lines

Pfingsten, T., Herrmann, D., Schnitzler, T., Feustel, A., Schölkopf, B.

IEEE Transactions on Automation Science and Engineering, 4(3):465-469, July 2007 (article)

Abstract
The final properties of sophisticated products can be affected by many unapparent dependencies within the manufacturing process, and the products’ integrity can often only be checked in a final measurement. Troubleshooting can therefore be very tedious if not impossible in large assembly lines. In this paper we show that Feature Selection is an efficient tool for serial-grouped lines to reveal causes for irregularities in product attributes. We compare the performance of several methods for Feature Selection on real-world problems in mass-production of semiconductor devices. Note to Practitioners— We present a data based procedure to localize flaws in large production lines: using the results of final quality inspections and information about which machines processed which batches, we are able to identify machines which cause low yield.

PDF Web DOI [BibTex]

A Kernel Approach to Comparing Distributions

Gretton, A., Borgwardt, K., Rasch, M., Schölkopf, B., Smola, A.

In Proceedings of the 22. AAAI Conference on Artificial Intelligence, pages: 1637-1641, AAAI Press, Menlo Park, CA, USA, Twenty-Second AAAI Conference on Artificial Intelligence (AAAI), July 2007 (inproceedings)

Abstract
We describe a technique for comparing distributions without the need for density estimation as an intermediate step. Our approach relies on mapping the distributions into a Reproducing Kernel Hilbert Space. We apply this technique to construct a two-sample test, which is used for determining whether two sets of observations arise from the same distribution. We use this test in attribute matching for databases using the Hungarian marriage method, where it performs strongly. We also demonstrate excellent performance when comparing distributions over graphs, for which no alternative tests currently exist.
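
The statistic behind such a two-sample test can be written in a few lines. The sketch below computes an unbiased estimate of the squared maximum mean discrepancy (MMD) with an RBF kernel; the test threshold and null-distribution machinery of the paper are omitted.

```python
# Unbiased estimate of MMD^2 between two samples, using an RBF kernel.
import numpy as np

def rbf(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2_unbiased(X, Y, gamma=0.5):
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = rbf(X, X, gamma), rbf(Y, Y, gamma), rbf(X, Y, gamma)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))  # drop the diagonal
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (100, 2))
print(mmd2_unbiased(X, rng.normal(0, 1, (100, 2))))  # near zero: same law
print(mmd2_unbiased(X, rng.normal(1, 1, (100, 2))))  # clearly positive
```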

PDF Web [BibTex]

Gene selection via the BAHSIC family of algorithms

Song, L., Bedo, J., Borgwardt, K., Gretton, A., Smola, A.

Bioinformatics, 23(13: ISMB/ECCB 2007 Conference Proceedings):i490-i498, July 2007 (article)

Abstract
Motivation: Identifying significant genes among thousands of sequences on a microarray is a central challenge for cancer research in bioinformatics. The ultimate goal is to detect the genes that are involved in disease outbreak and progression. A multitude of methods have been proposed for this task of feature selection, yet the selected gene lists differ greatly between different methods. To accomplish biologically meaningful gene selection from microarray data, we have to understand the theoretical connections and the differences between these methods. In this article, we define a kernel-based framework for feature selection based on the Hilbert–Schmidt independence criterion and backward elimination, called BAHSIC. We show that several well-known feature selectors are instances of BAHSIC, thereby clarifying their relationship. Furthermore, by choosing a different kernel, BAHSIC allows us to easily define novel feature selection algorithms. As a further advantage, feature selection via BAHSIC works directly on multiclass problems. Results: In a broad experimental evaluation, the members of the BAHSIC family reach high levels of accuracy and robustness when compared to other feature selection techniques. Experiments show that features selected with a linear kernel provide the best classification performance in general, but if strong non-linearities are present in the data then non-linear kernels can be more suitable.

Web DOI [BibTex]

Phenotyping of Chondrocytes In Vivo and In Vitro Using cDNA Array Technology

Zien, A., Gebhard, P., Fundel, K., Aigner, T.

Clinical Orthopaedics and Related Research, 460, pages: 226-233, July 2007 (article)

Abstract
The cDNA array technology is a powerful tool to analyze a high number of genes in parallel. We investigated whether large-scale gene expression analysis allows clustering and identification of cellular phenotypes of chondrocytes in different in vivo and in vitro conditions. In 100% of cases, clustering analysis distinguished between in vivo and in vitro samples, suggesting fundamental differences in chondrocytes in situ and in vitro regardless of the culture conditions or disease status. It also allowed us to differentiate between healthy and osteoarthritic cartilage. The clustering also revealed the relative importance of the investigated culturing conditions (stimulation agent, stimulation time, bead/monolayer). We augmented the cluster analysis with a statistical search for genes showing differential expression. The identified genes provided hints to the molecular basis of the differences between the sample classes. Our approach shows the power of modern bioinformatic algorithms for understanding and classifying chondrocytic phenotypes in vivo and in vitro. Although it does not generate new experimental data per se, it provides valuable information regarding the biology of chondrocytes and may provide tools for diagnosing and staging the osteoarthritic disease process.

DOI [BibTex]

Manifold Denoising as Preprocessing for Finding Natural Representations of Data

Hein, M., Maier, M.

In AAAI-07, pages: 1646-1649, AAAI Press, Menlo Park, CA, USA, Twenty-Second AAAI Conference on Artificial Intelligence (AAAI-07), July 2007 (inproceedings)

Abstract
A natural representation of data is the set of parameters which generated the data. If the parameter space is continuous we can regard it as a manifold. In practice we usually do not know this manifold but we just have some representation of the data, often in a very high-dimensional feature space. Since the number of internal parameters does not change with the representation, the data will effectively lie on a low-dimensional submanifold in feature space. Due to measurement errors this data is usually corrupted by noise, which particularly in high-dimensional feature spaces makes it almost impossible to find the manifold structure. This paper reviews a method called Manifold Denoising, which projects the data onto the submanifold using a diffusion process on a graph generated by the data. We demonstrate that the method is capable of dealing with non-trivial high-dimensional noise. Moreover, we show that using the method as a preprocessing step can significantly improve the results of a semi-supervised learning algorithm.
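
In the spirit of the reviewed method, a compact sketch: build a weighted neighborhood graph on the sample and move the points a few steps along the random-walk graph Laplacian, which contracts them toward the underlying submanifold. The kernel width and step size below are arbitrary illustrative choices, not the paper's settings.

```python
# Graph-diffusion denoising sketch: flow point coordinates along the
# random-walk Laplacian of a Gaussian-weighted neighborhood graph.
import numpy as np

def manifold_denoise(X, sigma=0.3, dt=0.2, steps=20):
    for _ in range(steps):
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (2 * sigma**2))        # Gaussian edge weights
        P = W / W.sum(axis=1, keepdims=True)    # random-walk transition matrix
        X = X + dt * (P @ X - X)                # one step of X <- X - dt * L_rw X
    return X

# Noisy circle: the points contract back toward the underlying 1-d manifold.
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 300)
X = np.c_[np.cos(t), np.sin(t)] + 0.1 * rng.standard_normal((300, 2))
X_denoised = manifold_denoise(X)
```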

PDF Web [BibTex]

Learning the Influence of Spatio-Temporal Variations in Local Image Structure on Visual Saliency

Kienzle, W., Wichmann, F., Schölkopf, B., Franz, M.

10th Tübinger Wahrnehmungskonferenz (TWK 2007), 10, pages: 1, July 2007 (poster)

Abstract
Computational models for bottom-up visual attention traditionally consist of a bank of Gabor-like or Difference-of-Gaussians filters and a nonlinear combination scheme which combines the filter responses into a real-valued saliency measure [1]. Recently it was shown that a standard machine learning algorithm can be used to derive a saliency model from human eye movement data with a very small number of additional assumptions. The learned model is much simpler than previous models, but nevertheless has state-of-the-art prediction performance [2]. A central result from this study is that DoG-like center-surround filters emerge as the unique solution to optimizing the predictivity of the model. Here we extend the learning method to the temporal domain. While the previous model [2] predicts visual saliency based on local pixel intensities in a static image, our model also takes into account temporal intensity variations. We find that the learned model responds strongly to temporal intensity changes occurring 200-250ms before a saccade is initiated. This delay coincides with the typical saccadic latencies, indicating that the learning algorithm has extracted a meaningful statistic from the training data. In addition, we show that the model correctly predicts a significant proportion of human eye movements on previously unseen test data.

Web [BibTex]

Graph-based Protein Functional Classification

Shin, H., Lisewski, A., Lichtarge, O.

In BIOCOMP‘07, pages: 738-744, (Editors: Arabnia, H. R., M. Q. Yang, J. Y. Yang), CSREA Press, Las Vegas, NV, USA, 2007 International Conference on Bioinformatics and Computational Biology, July 2007 (inproceedings)

Web [BibTex]

Data-driven goodness-of-fit tests

Langovoy, MA.

Biologische Kybernetik, Georg-August-Universität Göttingen, Göttingen, Germany, July 2007 (phdthesis)

Web [BibTex]

Common Sequence Polymorphisms Shaping Genetic Diversity in Arabidopsis thaliana

Clark, R., Schweikert, G., Toomajian, C., Ossowski, S., Zeller, G., Shinn, P., Warthmann, N., Hu, T., Fu, G., Hinds, D., Chen, H., Frazer, K., Huson, D., Schölkopf, B., Nordborg, M., Rätsch, G., Ecker, J., Weigel, D.

Science, 317(5836):338-342, July 2007 (article)

Abstract
The genomes of individuals from the same species vary in sequence as a result of different evolutionary processes. To examine the patterns of, and the forces shaping, sequence variation in Arabidopsis thaliana, we performed high-density array resequencing of 20 diverse strains (accessions). More than 1 million nonredundant single-nucleotide polymorphisms (SNPs) were identified at moderate false discovery rates (FDRs), and ~4% of the genome was identified as being highly dissimilar or deleted relative to the reference genome sequence. Patterns of polymorphism are highly nonrandom among gene families, with genes mediating interaction with the biotic environment having exceptional polymorphism levels. At the chromosomal scale, regional variation in polymorphism was readily apparent. A scan for recent selective sweeps revealed several candidate regions, including a notable example in which almost all variation was removed in a 500-kilobase window. Analyzing the polymorphisms we describe in larger sets of accessions will enable a detailed understanding of forces shaping population-wide sequence variation in A. thaliana.

PDF DOI [BibTex]

Supervised Feature Selection via Dependence Estimation

Song, L., Smola, A., Gretton, A., Borgwardt, K., Bedo, J.

In Proceedings of the 24th Annual International Conference on Machine Learning (ICML 2007), pages: 823-830, (Editors: Ghahramani, Z. ), ACM Press, New York, NY, USA, Twenty-Fourth Annual International Conference on Machine Learning (ICML), June 2007 (inproceedings)

Abstract
We introduce a framework for filtering features that employs the Hilbert-Schmidt Independence Criterion (HSIC) as a measure of dependence between the features and the labels. The key idea is that good features should maximise such dependence. Feature selection for various supervised learning problems (including classification and regression) is unified under this framework, and the solutions can be approximated using a backward-elimination algorithm. We demonstrate the usefulness of our method on both artificial and real world datasets.
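
A minimal sketch of the backward-elimination scheme with a biased HSIC estimator: at each round, drop the feature whose removal leaves the dependence between the remaining features and the labels highest. The linear feature kernel and indicator label kernel are simple choices, not the only ones the framework allows.

```python
# HSIC-based backward elimination for supervised feature selection.
import numpy as np

def hsic(K, L):
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2  # biased HSIC estimate

def backward_eliminate(X, y, n_keep=2):
    L = (y[:, None] == y[None, :]).astype(float)   # indicator label kernel
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        scores = []
        for j in active:                           # score removal of feature j
            rest = [f for f in active if f != j]
            K = X[:, rest] @ X[:, rest].T          # linear kernel on the rest
            scores.append(hsic(K, L))
        active.remove(active[int(np.argmax(scores))])  # drop least useful
    return active

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 100)
X = rng.standard_normal((100, 5))
X[:, 0] += 2.0 * y                                 # only feature 0 is relevant
print(backward_eliminate(X, y))                    # feature 0 survives
```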

PDF Web DOI [BibTex]

A Kernel-Based Causal Learning Algorithm

Sun, X., Janzing, D., Schölkopf, B., Fukumizu, K.

In Proceedings of the 24th International Conference on Machine Learning, pages: 855-862, (Editors: Z Ghahramani), ACM Press, New York, NY, USA, ICML, June 2007 (inproceedings)

Abstract
We describe a causal learning method, which employs measuring the strength of statistical dependences in terms of the Hilbert-Schmidt norm of kernel-based cross-covariance operators. Following the line of the common faithfulness assumption of constraint-based causal learning, our approach assumes that a variable Z is likely to be a common effect of X and Y, if conditioning on Z increases the dependence between X and Y. Based on this assumption, we collect "votes" for hypothetical causal directions and orient the edges by the majority principle. In most experiments with known causal structures, our method provided plausible results and outperformed the conventional constraint-based PC algorithm.

PDF Web DOI [BibTex]

Entire Regularization Paths for Graph Data

Tsuda, K.

In ICML 2007, pages: 919-926, (Editors: Ghahramani, Z. ), ACM Press, New York, NY, USA, 24th Annual International Conference on Machine Learning, June 2007 (inproceedings)

Abstract
Graph data such as chemical compounds and XML documents are getting more common in many application domains. A main difficulty of graph data processing lies in the intrinsic high dimensionality of graphs, namely, when a graph is represented as a binary feature vector of indicators of all possible subgraph patterns, the dimensionality gets too large for usual statistical methods. We propose an efficient method to select a small number of salient patterns by regularization path tracking. The generation of useless patterns is minimized by progressive extension of the search space. In experiments, it is shown that our technique is considerably more efficient than a simpler approach based on frequent substructure mining.

PDF Web DOI [BibTex]

Les Représentations Prédictives des États et des Politiques

Boularias, A., Chaib-Draa, B.

In MFI 2007, pages: 37-48, Quatrièmes Journées Francophones Modèles Formels de l‘Interaction, June 2007 (inproceedings)

Abstract
We propose in this article a new approach to representing policies (strategies) in stochastic, partially observable environments. We are particularly interested in multi-agent systems, where each agent knows only its own policies and must choose the best one among them according to its belief state about the policies of the other agents. Our model uses fewer parameters than the usual representation methods, such as decision trees or stochastic finite-state controllers, thereby speeding up planning algorithms. We also show how this model can be used effectively for cooperative multi-agent planning without communication; the empirical results are compared with the DEC-POMDP (Decentralized Partially Observable Markov Decision Process) model.

PDF Web [BibTex]

Graph Laplacians and their Convergence on Random Neighborhood Graphs

Hein, M., Audibert, J., von Luxburg, U.

Journal of Machine Learning Research, 8, pages: 1325-1370, June 2007 (article)

Abstract
Given a sample from a probability measure with support on a submanifold in Euclidean space one can construct a neighborhood graph which can be seen as an approximation of the submanifold. The graph Laplacian of such a graph is used in several machine learning methods like semi-supervised learning, dimensionality reduction and clustering. In this paper we determine the pointwise limit of three different graph Laplacians used in the literature as the sample size increases and the neighborhood size approaches zero. We show that for a uniform measure on the submanifold all graph Laplacians have the same limit up to constants. However in the case of a non-uniform measure on the submanifold only the so-called random walk graph Laplacian converges to the weighted Laplace-Beltrami operator.
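
For reference, the three constructions compared in the paper can be written down directly from a weighted neighborhood graph; the paper's result is that only the random-walk variant converges to the weighted Laplace-Beltrami operator under non-uniform sampling. The Gaussian weights and bandwidth below are illustrative choices.

```python
# Three standard graph Laplacians built from a Gaussian-weighted graph.
import numpy as np

def graph_laplacians(X, sigma=0.5):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma**2))                # Gaussian edge weights
    np.fill_diagonal(W, 0.0)                        # no self-loops
    d = W.sum(axis=1)                               # degrees
    L_unnorm = np.diag(d) - W                       # unnormalized: D - W
    L_rw = np.eye(len(d)) - W / d[:, None]          # random walk: I - D^-1 W
    D_isqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(d)) - D_isqrt @ W @ D_isqrt  # symmetric normalization
    return L_unnorm, L_rw, L_sym
```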

PDF PDF [BibTex]

Dirichlet Process Mixtures of Factor Analysers

Görür, D., Rasmussen, C.

Fifth Workshop on Bayesian Inference in Stochastic Processes (BSP5), June 2007 (talk)

Abstract
Mixture of factor analysers (MFA) is a well-known model that combines the dimensionality reduction technique of Factor Analysis (FA) with mixture modeling. The key issue in MFA is deciding on the latent dimension and the number of mixture components to be used. The Bayesian treatment of MFA has been considered by Beal and Ghahramani (2000) using variational approximation and by Fokoué and Titterington (2003) using birth-and-death Markov chain Monte Carlo (MCMC). Here, we present the nonparametric MFA model utilizing a Dirichlet process (DP) prior on the component parameters (that is, the factor loading matrix and the mean vector of each component) and describe an MCMC scheme for inference. The clustering property of the DP provides automatic selection of the number of mixture components. The latent dimensionality of each component is inferred by automatic relevance determination (ARD). Identifying the action potentials of individual neurons from extracellular recordings, known as spike sorting, is a challenging clustering problem. We apply our model for clustering the waveforms recorded from the cortex of a macaque monkey.

Web [BibTex]

An Extensible Probabilistic Transformation-based Approach to the Third Recognizing Textual Entailment Challenge

Harmeling, S.

In TextEntail 2007, pages: 137-142, ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, June 2007 (inproceedings)

Abstract
We introduce a system for textual entailment that is based on a probabilistic model of entailment. The model is defined using some calculus of transformations on dependency trees, which is characterized by the fact that derivations in that calculus preserve the truth only with a certain probability. We also describe a possible set of transformations (and with it implicitly a calculus) that was successfully applied to the RTE3 challenge data. However, our system can be improved in many ways and we see it as the starting point for a promising new approach to textual entailment.

Web [BibTex]

Weighted Substructure Mining for Image Analysis

Nowozin, S., Tsuda, K., Uno, T., Kudo, T., BakIr, G.

In CVPR 2007, pages: 1-8, IEEE Computer Society, Los Alamitos, CA, USA, 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 2007 (inproceedings)

Abstract
In web-related applications of image categorization, it is desirable to derive an interpretable classification rule with high accuracy. Using the bag-of-words representation and the linear support vector machine, one can partly fulfill the goal, but the accuracy of linear classifiers is not high and the obtained features are not informative for users. We propose to combine item set mining and large margin classifiers to select features from the power set of all visual words. Our resulting classification rule is easier to browse and simpler to understand, because each feature has richer information. As a next step, each image is represented as a graph where nodes correspond to local image features and edges encode geometric relations between features. Combining graph mining and boosting, we can obtain a classification rule based on subgraph features that contain more information than the set features. We evaluate our algorithm in a web-retrieval ranking task where the goal is to reject outliers from a set of images returned for a keyword query. Furthermore, it is evaluated on the supervised classification tasks with the challenging VOC2005 data set. Our approach yields excellent accuracy in the unsupervised ranking task compared to a recently proposed probabilistic model and competitive results in the supervised classification task.

PDF DOI [BibTex]

Local Learning Projections

Wu, M., Yu, K., Yu, S., Schölkopf, B.

In Proceedings of the 24th International Conference on Machine Learning, pages: 1039-1046, (Editors: Z Ghahramani), ACM Press, New York, NY, USA, ICML, June 2007 (inproceedings)

Abstract
This paper presents a Local Learning Projection (LLP) approach for linear dimensionality reduction. We first point out that the well known Principal Component Analysis (PCA) essentially seeks the projection that has the minimal global estimation error. Then we propose a dimensionality reduction algorithm that leads to the projection with the minimal local estimation error, and elucidate its advantages for classification tasks. We also indicate that LLP keeps the local information in the sense that the projection value of each point can be well estimated based on its neighbors and their projection values. Experimental results are provided to validate the effectiveness of the proposed algorithm.

PDF Web DOI [BibTex]

Training and Approximation of a Primal Multiclass Support Vector Machine

Zien, A., Bona, F., Ong, C.

In ASMDA 2007, pages: 1-8, (Editors: Skiadas, C. H.), 12th International Conference on Applied Stochastic Models and Data Analysis, June 2007 (inproceedings)

Abstract
We revisit the multiclass support vector machine (SVM) and generalize the formulation to convex loss functions and joint feature maps. Motivated by recent work [Chapelle, 2006] we use logistic loss and softmax to enable gradient based primal optimization. Kernels are incorporated via kernel principal component analysis (KPCA), which naturally leads to approximation methods for large scale problems. We investigate similarities and differences to previous multiclass SVM approaches. Experimental comparisons to previous approaches and to the popular one-vs-rest SVM are presented on several different datasets.
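
The abstract's ingredients reduce to a short primal training loop: a linear multiclass model, softmax probabilities, logistic (cross-entropy) loss, an L2 regularizer, and plain gradient descent. The KPCA-based kernelization is omitted here, and all constants are assumptions.

```python
# Primal multiclass training sketch: softmax + logistic loss + L2 regularizer.
import numpy as np

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)   # numerical stability
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def train_primal_multiclass(X, y, n_classes, lam=1e-2, lr=0.1, iters=500):
    n, d = X.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[y]               # one-hot labels
    for _ in range(iters):
        P = softmax(X @ W)                 # class probabilities
        grad = X.T @ (P - Y) / n + lam * W # logistic loss + L2 gradient
        W -= lr * grad                     # plain gradient descent step
    return W

rng = np.random.default_rng(0)
X = np.r_[rng.normal(-2, 1, (50, 2)), rng.normal(0, 1, (50, 2)),
          rng.normal(2, 1, (50, 2))]
y = np.repeat([0, 1, 2], 50)
W = train_primal_multiclass(X, y, 3)
print((softmax(X @ W).argmax(1) == y).mean())  # training accuracy
```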

PDF PostScript Web [BibTex]

New BCI approaches: Selective Attention to Auditory and Tactile Stimulus Streams

Hill, N., Raths, C.

Invited talk at the PASCAL Workshop on Methods of Data Analysis in Computational Neuroscience and Brain Computer Interfaces, June 2007 (talk)

Abstract
When considering Brain-Computer Interface (BCI) development for patients in the most severely paralysed states, there is considerable motivation to move away from BCI systems based on either motor cortex activity, or on visual stimuli. Together these account for most of current BCI research. I present the results of our recent exploration of new auditory- and tactile-stimulus-driven BCIs. The talk includes a tutorial on the construction and interpretation of classifiers which extract spatio-temporal features from event-related potential data. The effects and implications of whitening are discussed, and preliminary results on the effectiveness of a low-rank constraint (Tomioka and Aihara 2007) are shown.

PDF Web [BibTex]

Towards Motor Skill Learning in Robotics

Peters, J.

Interactive Robot Learning - RSS workshop, June 2007 (talk)

Web [BibTex]

Nonlinear independent component analysis with minimum nonlinear distortion

Zhang, K., Chan, L.

In ICML ’07: Proceedings of the 24th international conference on Machine learning, pages: 1127-1134, (Editors: Z Ghahramani), ACM, New York, NY, USA, 24th International Conference on Machine Learning (ICML), June 2007 (inproceedings)

PDF [BibTex]
