

2008


Information Consistency of Nonparametric Gaussian Process Methods

Seeger, MW., Kakade, SM., Foster, DP.

IEEE Transactions on Information Theory, 54(5):2376-2382, May 2008 (article)

Abstract
Bayesian nonparametric models are widely and successfully used for statistical prediction. While posterior consistency properties are well studied in quite general settings, results have been proved using abstract concepts such as metric entropy, and they come with subtle conditions which are hard to validate and not intuitive when applied to concrete models. Furthermore, convergence rates are difficult to obtain. By focussing on the concept of information consistency for Bayesian Gaussian process (GP) models, consistency results and convergence rates are obtained via a regret bound on cumulative log loss. These results depend strongly on the covariance function of the prior process, thereby giving a novel interpretation to penalization with reproducing kernel Hilbert space norms and to commonly used covariance function classes and their parameters. The proof of the main result employs elementary convexity arguments only. A theorem of Widom is used in order to obtain precise convergence rates for several covariance functions widely used in practice.
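The bounds here are governed by an information-gain quantity of the prior covariance: for n observations with noise variance sigma^2 and kernel matrix K_n, the cumulative log-loss regret is controlled by a term of the form (1/2) log det(I + sigma^-2 K_n). The following minimal sketch (not the authors' code; kernel, length-scale, noise level and inputs are illustrative assumptions) shows the sub-linear growth of this term in n for a squared-exponential covariance, which is what yields vanishing per-step regret:

    import numpy as np

    def rbf_kernel(X, lengthscale=1.0):
        # squared-exponential covariance matrix of the inputs X
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale**2)

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))   # illustrative 1-D inputs
    sigma2 = 0.1                            # assumed noise variance

    for n in (10, 50, 100, 200):
        K = rbf_kernel(X[:n])
        gain = 0.5 * np.linalg.slogdet(np.eye(n) + K / sigma2)[1]
        print(n, gain)   # grows much slower than n => per-step regret vanishes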

Web DOI [BibTex]



New Frontiers in Characterizing Structure and Dynamics by NMR

Nilges, M., Markwick, P., Malliavin, TE., Rieping, W., Habeck, M.

In Computational Structural Biology: Methods and Applications, pages: 655-680, (Editors: Schwede, T. , M. C. Peitsch), World Scientific, New Jersey, NJ, USA, May 2008 (inbook)

Abstract
Nuclear Magnetic Resonance (NMR) spectroscopy has emerged as the method of choice for studying both the structure and the dynamics of biological macromolecules in solution. Despite the maturity of the NMR method for structure determination, its application faces a number of challenges. The method is limited to systems of relatively small molecular mass, data collection times are long, data analysis remains a lengthy procedure, and it is difficult to evaluate the quality of the final structures. Recent years have seen significant advances in experimental techniques to overcome or reduce some of these limitations. The function of bio-macromolecules is determined by both their 3D structure and their conformational dynamics. These molecules are inherently flexible systems displaying a broad range of dynamics on timescales from picoseconds to seconds. NMR is unique in its ability to obtain dynamic information on an atomic scale. The experimental information on structure and dynamics is intricately mixed. It is, however, difficult to unite both structural and dynamical information in one consistent model, and protocols for the determination of structure and dynamics are performed independently. This chapter deals with the challenges posed by the interpretation of NMR data on structure and dynamics. We first relate the standard structure calculation methods to Bayesian probability theory. We then briefly describe the advantages of a fully Bayesian treatment of structure calculation. Next, we illustrate the advantages of using Bayesian reasoning, at least in part, in standard structure calculations. The final part is devoted to the interpretation of experimental data on dynamics.

Web [BibTex]



Relating the Thermodynamic Arrow of Time to the Causal Arrow

Allahverdyan, A., Janzing, D.

Journal of Statistical Mechanics, 2008(P04001):1-21, April 2008 (article)

Abstract
Consider a Hamiltonian system that consists of a slow subsystem S and a fast subsystem F. The autonomous dynamics of S is driven by an effective Hamiltonian, but its thermodynamics is unexpected. We show that a well-defined thermodynamic arrow of time (second law) emerges for S whenever there is a well-defined causal arrow from S to F and the back-action is negligible. This is because a non-negligible back-action of F on S is described by a non-globally Hamiltonian Born–Oppenheimer term that violates the Liouville theorem and makes the second law inapplicable to S. If S and F are mixing, under the causal arrow condition they are described by microcanonical distributions P(S) and P(S|F). Their structure supports a causal inference principle proposed recently in machine learning.

Web DOI [BibTex]



Generalization and Similarity in Exemplar Models of Categorization: Insights from Machine Learning

Jäkel, F., Schölkopf, B., Wichmann, F.

Psychonomic Bulletin and Review, 15(2):256-271, April 2008 (article)

Abstract
Exemplar theories of categorization depend on similarity for explaining subjects' ability to generalize to new stimuli. A major criticism of exemplar theories concerns their lack of abstraction mechanisms and thus, seemingly, generalization ability. Here, we use insights from machine learning to demonstrate that exemplar models can actually generalize very well. Kernel methods in machine learning are akin to exemplar models and very successful in real-world applications. Their generalization performance depends crucially on the chosen similarity measure. While similarity plays an important role in describing generalization behavior, it is not the only factor that controls generalization performance. In machine learning, kernel methods are often combined with regularization techniques to ensure good generalization. These same techniques are easily incorporated in exemplar models. We show that the Generalized Context Model (Nosofsky, 1986) and ALCOVE (Kruschke, 1992) are closely related to a statistical model called kernel logistic regression. We argue that generalization is central to the enterprise of understanding categorization behavior and suggest how insights from machine learning can offer some guidance. Keywords: kernel, similarity, regularization, generalization, categorization.
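As a concrete illustration of the link drawn above, here is a minimal kernel logistic regression, in which the category probability is a logistic function of similarity-weighted sums over stored exemplars and an RKHS-norm penalty supplies the regularization discussed in the abstract. The data, kernel width, penalty and step size are illustrative assumptions, not values from the paper:

    import numpy as np

    def rbf(X, Z, gamma=1.0):
        # kernel similarity of each row of X to each stored exemplar in Z
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def fit_klr(K, y, lam=0.1, iters=500, lr=0.5):
        # gradient descent on the penalized logistic log loss
        alpha = np.zeros(len(y))
        for _ in range(iters):
            p = 1.0 / (1.0 + np.exp(-K @ alpha))
            grad = K @ (p - y) + 2 * lam * K @ alpha   # loss gradient + RKHS penalty
            alpha -= lr * grad / len(y)
        return alpha

    rng = np.random.default_rng(1)
    X = rng.normal(size=(60, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy two-category task
    alpha = fit_klr(rbf(X, X), y)
    p_new = 1.0 / (1.0 + np.exp(-rbf(X[:5], X) @ alpha))   # exemplar-similarity prediction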

PDF Web DOI [BibTex]


Data-driven efficient score tests for deconvolution hypotheses

Langovoy, M.

Inverse Problems, 24(2):1-17, April 2008 (article)

Abstract
We consider testing statistical hypotheses about densities of signals in deconvolution models. A new approach to this problem is proposed. We construct score tests for the deconvolution density when the noise density is known, and efficient score tests for the case of an unknown noise density. The tests are combined with model selection rules that choose reasonable model dimensions automatically from the data. Consistency of the tests is proved.

PDF DOI [BibTex]


Manifold-valued Thin-plate Splines with Applications in Computer Graphics

Steinke, F., Hein, M., Peters, J., Schölkopf, B.

Computer Graphics Forum, 27(2):437-448, April 2008 (article)

Abstract
We present a generalization of thin-plate splines for interpolation and approximation of manifold-valued data, and demonstrate its usefulness in computer graphics with several applications from different fields. The cornerstone of our theoretical framework is an energy functional for mappings between two Riemannian manifolds which is independent of parametrization and respects the geometry of both manifolds. If the manifolds are Euclidean, the energy functional reduces to the classical thin-plate spline energy. We show how the resulting optimization problems can be solved efficiently in many cases. Our example applications range from orientation interpolation and motion planning in animation, through geometric modelling tasks, to color interpolation.

PDF AVI Web DOI [BibTex]


The Metric Nearness Problem

Brickell, J., Dhillon, I., Sra, S., Tropp, J.

SIAM Journal on Matrix Analysis and Applications, 30(1):375-396, April 2008 (article)

Abstract
Metric nearness refers to the problem of optimally restoring metric properties to distance measurements that happen to be nonmetric due to measurement errors or otherwise. Metric data can be important in various settings, for example, in clustering, classification, metric-based indexing, query processing, and graph theoretic approximation algorithms. This paper formulates and solves the metric nearness problem: Given a set of pairwise dissimilarities, find a "nearest" set of distances that satisfy the properties of a metric, principally the triangle inequality. For solving this problem, the paper develops efficient triangle fixing algorithms that are based on an iterative projection method. An intriguing aspect of the metric nearness problem is that a special case turns out to be equivalent to the all-pairs shortest paths problem. The paper exploits this equivalence and develops a new algorithm for the latter problem using a primal-dual method. Applications to graph clustering are provided as an illustration. We include experiments that demonstrate the computational superiority of triangle fixing over general-purpose convex programming software. Finally, we conclude by suggesting various useful extensions and generalizations to metric nearness.
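A simplified sketch of the triangle-fixing idea for the l2 variant: cycle over triangles and project the dissimilarity matrix onto each violated triangle inequality. A full Dykstra-style method also carries dual correction terms to converge to the exact nearest metric; those are omitted here for brevity:

    import numpy as np
    from itertools import permutations

    def triangle_fix(D, iters=50):
        # repeatedly zero out each violated constraint d_ij <= d_ik + d_kj
        D = D.copy()
        n = len(D)
        for _ in range(iters):
            for i, j, k in permutations(range(n), 3):
                viol = D[i, j] - D[i, k] - D[k, j]
                if viol > 0:   # spread the violation equally over the three edges
                    D[i, j] -= viol / 3; D[j, i] = D[i, j]
                    D[i, k] += viol / 3; D[k, i] = D[i, k]
                    D[k, j] += viol / 3; D[j, k] = D[k, j]
        return D

    # example: Euclidean distances corrupted by measurement noise
    rng = np.random.default_rng(0)
    P = rng.random((6, 2))
    D = np.sqrt(((P[:, None] - P[None]) ** 2).sum(-1)) + 0.2 * rng.random((6, 6))
    D = (D + D.T) / 2; np.fill_diagonal(D, 0)
    D_metric = triangle_fix(D)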

Web DOI [BibTex]



Bayesian Inference and Optimal Design for the Sparse Linear Model

Seeger, MW.

Journal of Machine Learning Research, 9, pages: 759-813, April 2008 (article)

Abstract
The linear model with sparsity-favouring prior on the coefficients has important applications in many different domains. In machine learning, most methods to date search for maximum a posteriori sparse solutions and neglect to represent posterior uncertainties. In this paper, we address problems of Bayesian optimal design (or experiment planning), for which accurate estimates of uncertainty are essential. To this end, we employ expectation propagation approximate inference for the linear model with Laplace prior, giving new insight into numerical stability properties and proposing a robust algorithm. We also show how to estimate model hyperparameters by empirical Bayesian maximisation of the marginal likelihood, and propose ideas in order to scale up the method to very large underdetermined problems. We demonstrate the versatility of our framework on the application of gene regulatory network identification from micro-array expression data, where both the Laplace prior and the active experimental design approach are shown to result in significant improvements. We also address the problem of sparse coding of natural images, and show how our framework can be used for compressive sensing tasks.

PDF PDF [BibTex]



Consistency of Spectral Clustering

von Luxburg, U., Belkin, M., Bousquet, O.

Annals of Statistics, 36(2):555-586, April 2008 (article)

Abstract
Consistency is a key property of statistical algorithms when the data is drawn from some underlying probability distribution. Surprisingly, despite decades of work, little is known about consistency of most clustering algorithms. In this paper we investigate consistency of the popular family of spectral clustering algorithms, which clusters the data with the help of eigenvectors of graph Laplacian matrices. We develop new methods to establish that for increasing sample size, those eigenvectors converge to the eigenvectors of certain limit operators. As a result we can prove that one of the two major classes of spectral clustering (normalized clustering) converges under very general conditions, while the other (unnormalized clustering) is only consistent under strong additional assumptions, which are not always satisfied in real data. We conclude that our analysis provides strong evidence for the superiority of normalized spectral clustering.
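For reference, a minimal form of the normalized variant whose consistency the paper establishes: build a similarity graph, take the bottom eigenvectors of the symmetric normalized Laplacian, and cluster their normalized rows. The Gaussian similarity, its bandwidth, and the sign-based assignment for two clusters are illustrative simplifications:

    import numpy as np

    def normalized_spectral_clustering(X, sigma=1.0):
        # Gaussian similarity graph
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (2 * sigma**2))
        np.fill_diagonal(W, 0)
        d = W.sum(1)
        # symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}
        L = np.eye(len(X)) - W / np.sqrt(np.outer(d, d))
        _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
        U = vecs[:, :2]
        U /= np.linalg.norm(U, axis=1, keepdims=True)
        return (U[:, 1] > 0).astype(int)     # two-cluster split from the 2nd eigenvector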

PDF DOI [BibTex]



Plant Classification from Bat-Like Echolocation Signals

Yovel, Y., Franz, MO., Stilz, P., Schnitzler, H-U.

PLoS Computational Biology, 4(3, e1000032):1-13, March 2008 (article)

Abstract
Classification of plants according to their echoes is an elementary component of bat behavior that plays an important role in spatial orientation and food acquisition. Vegetation echoes are, however, highly complex stochastic signals: from an acoustical point of view, a plant can be thought of as a three-dimensional array of leaves reflecting the emitted bat call. The received echo is therefore a superposition of many reflections. In this work we suggest that the classification of these echoes might not be such a troublesome routine for bats as formerly thought. We present a rather simple approach to classifying signals from a large database of plant echoes that were created by ensonifying plants with a frequency-modulated bat-like ultrasonic pulse. Our algorithm operates on the spectrogram of a single echo, from which it uses only features that are undoubtedly accessible to bats. We used a standard machine learning algorithm (SVM) to automatically extract suitable linear combinations of time and frequency cues from the spectrograms such that classification with high accuracy is enabled. This demonstrates that ultrasonic echoes are highly informative about the species membership of an ensonified plant, and that this information can be extracted with rather simple, biologically plausible analysis. Thus, our findings provide a new explanatory basis for the poorly understood observed abilities of bats in classifying vegetation and other complex objects.
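A toy sketch of the pipeline: spectrogram features of single echoes fed to a linear classifier. The ensonification data are not reproduced here, so synthetic "echoes" built as superpositions of randomly delayed reflections stand in for sparse versus dense foliage; the sampling rate, pulse shape and reflection densities are invented for illustration (the paper itself uses an SVM on spectrogram features):

    import numpy as np
    from scipy.signal import spectrogram
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    fs = 250_000   # assumed sampling rate

    def echo(n_reflections):
        # a plant echo as a superposition of randomly delayed reflections
        x = np.zeros(4096)
        for _ in range(n_reflections):
            i = rng.integers(0, 4000)
            x[i:i + 64] += rng.random() * np.sin(2 * np.pi * 60_000 * np.arange(64) / fs)
        return x

    X, y = [], []
    for label, density in [(0, 5), (1, 40)]:      # sparse vs dense "foliage"
        for _ in range(50):
            _, _, S = spectrogram(echo(density), fs=fs, nperseg=128)
            X.append(np.log1p(S).mean(axis=1))    # mean log power per frequency band
            y.append(label)
    clf = LinearSVC(dual=False).fit(np.array(X), np.array(y))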

Web DOI [BibTex]



The role of stimulus correlations for population decoding in the retina

Schwartz, G., Macke, J., Berry, M.

Computational and Systems Neuroscience 2008 (COSYNE 2008), 5, pages: 172, March 2008 (poster)

PDF Web [BibTex]



Causal Reasoning by Evaluating the Complexity of Conditional Densities with Kernel Methods

Sun, X., Janzing, D., Schölkopf, B.

Neurocomputing, 71(7-9):1248-1256, March 2008 (article)

Abstract
We propose a method to quantify the complexity of a conditional probability measure by a Hilbert space seminorm of the logarithm of its density. The concept of reproducing kernel Hilbert spaces (RKHSs) is a flexible tool for defining such a seminorm by choosing an appropriate kernel. We present several examples with artificial data sets where our kernel-based complexity measure is consistent with our intuitive understanding of complexity of densities. The intention behind the complexity measure is to provide a new approach to inferring causal directions. The idea is that the factorization of the joint probability measure P(effect, cause) into P(effect|cause)P(cause) typically leads to "simpler" and "smoother" terms than the factorization into P(cause|effect)P(effect). Since the conventional constraint-based approach to causal discovery is not able to determine the causal direction between only two variables, our inference principle can be particularly useful when combined with other existing methods. We provide several simple examples with real-world data where the true causal directions indeed lead to simpler (conditional) densities.

Web DOI [BibTex]


Natural Actor-Critic

Peters, J., Schaal, S.

Neurocomputing, 71(7-9):1180-1190, March 2008 (article)

Abstract
In this paper, we suggest a novel reinforcement learning architecture, the Natural Actor-Critic. The actor updates are achieved using stochastic policy gradients employing Amari's natural gradient approach, while the critic obtains both the natural policy gradient and additional parameters of a value function simultaneously by linear regression. We show that actor improvements with natural policy gradients are particularly appealing as these are independent of the coordinate frame of the chosen policy representation, and can be estimated more efficiently than regular policy gradients. The critic makes use of a special basis function parameterization motivated by the policy-gradient compatible function approximation. We show that several well-known reinforcement learning methods such as the original Actor-Critic and Bradtke's Linear Quadratic Q-Learning are in fact Natural Actor-Critic algorithms. Empirical evaluations illustrate the effectiveness of our techniques in comparison to previous methods, and also demonstrate their applicability for learning control on an anthropomorphic robot arm.
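The core update can be condensed to a few lines: the vanilla policy gradient g is preconditioned by the inverse Fisher information F of the policy, which makes the step invariant to the parametrization. The toy bandit-style task, the Gaussian policy with fixed unit variance, and the step size are illustrative assumptions, not the paper's experimental setup:

    import numpy as np

    rng = np.random.default_rng(0)
    theta = np.zeros(1)   # policy mean; standard deviation fixed at 1
    for _ in range(200):
        a = rng.normal(theta[0], 1.0, size=100)   # sample actions from the policy
        r = -(a - 2.0) ** 2                       # toy reward, optimum at a = 2
        score = a - theta[0]                      # d log pi / d theta for unit variance
        g = np.mean(score * (r - r.mean()))       # baseline-subtracted vanilla gradient
        F = np.mean(score ** 2)                   # empirical Fisher information
        theta += 0.1 * g / F                      # natural gradient step
    print(theta)   # approaches 2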

PDF PDF DOI [BibTex]



Poisson Geometry of Parabolic Bundles on Elliptic Curves

Balduzzi, D.

International Journal of Mathematics , 19(3):339-367, March 2008 (article)

Abstract
The moduli space of G-bundles on an elliptic curve with additional flag structure admits a Poisson structure. The bivector can be defined using double loop group, loop group, and sheaf cohomology constructions. We investigate the links between these methods and, for the case of SL2, perform explicit computations, describing the bracket and its leaves in detail.

Web DOI [BibTex]



Inferring Spike Trains From Local Field Potentials

Rasch, M., Gretton, A., Murayama, Y., Maass, W., Logothetis, N.

Journal of Neurophysiology, 99(3):1461-1476, March 2008 (article)

Abstract
We investigated whether it is possible to infer spike trains solely on the basis of the underlying local field potentials (LFPs). Using support vector machines and linear regression models, we found that in the primary visual cortex (V1) of monkeys, spikes can indeed be inferred from LFPs, at least with moderate success. Although there is a considerable degree of variation across electrodes, the low-frequency structure in spike trains (in the 100-ms range) can be inferred with reasonable accuracy, whereas exact spike positions are not reliably predicted. Two kinds of features of the LFP are exploited for prediction: the frequency power of bands in the high gamma-range (40–90 Hz) and information contained in low-frequency oscillations (<10 Hz), where both phase and power modulations are informative. Information analysis revealed that both features code (mainly) independent aspects of the spike-to-LFP relationship, with the low-frequency LFP phase coding for temporally clustered spiking activity. Although both features and prediction quality are similar during seminatural movie stimuli and spontaneous activity, prediction performance during spontaneous activity degrades much more slowly with increasing electrode distance. The general trend of data obtained with anesthetized animals is qualitatively mirrored in that of a more limited data set recorded in V1 of non-anesthetized monkeys. In contrast to the cortical field potentials, thalamic LFPs (e.g., LFPs derived from recordings in the dorsal lateral geniculate nucleus) hold no useful information for predicting spiking activity.

Web DOI [BibTex]



ISD: A Software Package for Bayesian NMR Structure Calculation

Rieping, W., Nilges, M., Habeck, M.

Bioinformatics, 24(8):1104-1105, February 2008 (article)

Abstract
SUMMARY: The conventional approach to calculating biomolecular structures from nuclear magnetic resonance (NMR) data is often viewed as subjective due to its dependence on rules of thumb for deriving geometric constraints and suitable values for theory parameters from noisy experimental data. As a result, it can be difficult to judge the precision of an NMR structure in an objective manner. The Inferential Structure Determination (ISD) framework, which has been introduced recently, addresses this problem by using Bayesian inference to derive a probability distribution that represents both the unknown structure and its uncertainty. It also determines additional unknowns, such as theory parameters, that normally need to be chosen empirically. Here we give an overview of the ISD software package, which implements this methodology. AVAILABILITY: The program is available at http://www.bioc.cam.ac.uk/isd

Web DOI [BibTex]



Probabilistic Structure Calculation

Nilges, M., Habeck, M., Rieping, W.

Comptes Rendus Chimie, 11(4-5):356-369, February 2008 (article)

Abstract
Molecular structures are usually calculated from experimental data with some method of energy minimisation or non-linear optimisation. Key aims of a structure calculation are to estimate the coordinate uncertainty, and to provide a meaningful measure of the quality of the fit to the data. We discuss approaches to optimally combine prior information and experimental data and the connection to probability theory. We analyse the appropriate statistics for NOEs and NOE-derived distances, and the related question of restraint potentials. Finally, we discuss approaches to determine the appropriate weight on the experimental evidence and to obtain in this way an estimate of the data quality from the structure calculation. Whereas objective estimates of coordinates and their uncertainties can only be obtained by a full Bayesian treatment of the problem, standard structure calculation methods continue to play an important role. To obtain the full benefit of these methods, they should be founded on a rigorous Bayesian analysis.

Web DOI [BibTex]



Optimization Techniques for Semi-Supervised Support Vector Machines

Chapelle, O., Sindhwani, V., Keerthi, S.

Journal of Machine Learning Research, 9, pages: 203-233, February 2008 (article)

Abstract
Due to its wide applicability, the problem of semi-supervised classification is attracting increasing attention in machine learning. Semi-Supervised Support Vector Machines (S3VMs) are based on applying the margin maximization principle to both labeled and unlabeled examples. Unlike SVMs, their formulation leads to a non-convex optimization problem. A suite of algorithms has recently been proposed for solving S3VMs. This paper reviews key ideas in this literature. The performance and behavior of various S3VM algorithms are studied together under a common experimental setting.

PDF [BibTex]



Fast Projection-based Methods for the Least Squares Nonnegative Matrix Approximation Problem

Kim, D., Sra, S., Dhillon, I.

Statistical Analysis and Data Mining, 1(1):38-51, February 2008 (article)

Abstract
Nonnegative matrix approximation (NNMA) is a popular matrix decomposition technique that has proven to be useful across a diverse variety of fields, with applications ranging from document analysis and image processing to bioinformatics and signal processing. Over the years, several algorithms for NNMA have been proposed, e.g., Lee and Seung's multiplicative updates, alternating least squares (ALS), and gradient descent-based procedures. However, most of these procedures suffer from either slow convergence, numerical instability, or, at worst, serious theoretical drawbacks. In this paper, we develop a new and improved algorithmic framework for the least-squares NNMA problem, which is not only theoretically well-founded, but also overcomes many deficiencies of other methods. Our framework readily admits powerful optimization techniques, and as concrete realizations we present implementations based on the Newton, BFGS and conjugate gradient methods. Our algorithms provide numerical results superior to both Lee and Seung's method and the alternating least squares heuristic, which was reported to work well in some situations but has no theoretical guarantees [1]. Our approach extends naturally to include regularization and box-constraints without sacrificing convergence guarantees. We present experimental results on both synthetic and real-world datasets that demonstrate the superiority of our methods, both in terms of better approximations as well as computational efficiency.
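A minimal projection-based scheme in the spirit of this framework, using plain projected gradient steps where the paper's concrete realizations use Newton, BFGS or conjugate gradient directions; the step size and initialization are illustrative:

    import numpy as np

    def nnma(A, r, iters=500, lr=1e-3, seed=0):
        # alternating projected gradient for min ||A - B C||_F^2 s.t. B, C >= 0
        rng = np.random.default_rng(seed)
        m, n = A.shape
        B, C = rng.random((m, r)), rng.random((r, n))
        for _ in range(iters):
            B = np.maximum(B - lr * (B @ C - A) @ C.T, 0)   # step, project onto B >= 0
            C = np.maximum(C - lr * B.T @ (B @ C - A), 0)   # likewise for C
        return B, C

    A = np.abs(np.random.default_rng(1).normal(size=(30, 20)))
    B, C = nnma(A, r=5)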

Web DOI [BibTex]



A Unifying Probabilistic Framework for Analyzing Residual Dipolar Couplings

Habeck, M., Nilges, M., Rieping, W.

Journal of Biomolecular NMR, 40(2):135-144, February 2008 (article)

Abstract
Residual dipolar couplings provide complementary information to the nuclear Overhauser effect measurements that are traditionally used in biomolecular structure determination by NMR. In a de novo structure determination, however, lack of knowledge about the degree and orientation of molecular alignment complicates the analysis of dipolar coupling data. We present a probabilistic framework for analyzing residual dipolar couplings and demonstrate that it is possible to estimate the atomic coordinates, the complete molecular alignment tensor, and the error of the couplings simultaneously. As a by-product, we also obtain estimates of the uncertainty in the coordinates and the alignment tensor. We show that our approach encompasses existing methods for determining the alignment tensor as special cases, including least squares estimation, histogram fitting, and elimination of an explicit alignment tensor in the restraint energy.

PDF DOI [BibTex]



Contour-propagation Algorithms for Semi-automated Reconstruction of Neural Processes

Macke, J., Maack, N., Gupta, R., Denk, W., Schölkopf, B., Borst, A.

Journal of Neuroscience Methods, 167(2):349-357, January 2008 (article)

Abstract
A new technique, "Serial Block Face Scanning Electron Microscopy" (SBFSEM), allows for automatic sectioning and imaging of biological tissue with a scanning electron microscope. Image stacks generated with this technology have a resolution sufficient to distinguish different cellular compartments, including synaptic structures, which should make it possible to obtain detailed anatomical knowledge of complete neuronal circuits. Such an image stack contains several thousands of images and is recorded with a minimal voxel size of 10-20 nm in the x- and y-directions and 30 nm in the z-direction. Consequently, a tissue block of 1 mm³ (the approximate volume of the Calliphora vicina brain) will produce several hundred terabytes of data. Therefore, highly automated 3D reconstruction algorithms are needed. As a first step in this direction, we have developed semi-automated segmentation algorithms for precise contour tracing of cell membranes. These algorithms were embedded into an easy-to-operate user interface, which allows direct 3D observation of the extracted objects during the segmentation of image stacks. Compared to purely manual tracing, processing time is greatly accelerated.

PDF Web DOI [BibTex]



A Quantum-Statistical-Mechanical Extension of Gaussian Mixture Model

Tanaka, K., Tsuda, K.

Journal of Physics: Conference Series, 95(012023):1-9, January 2008 (article)

Abstract
We propose an extension of Gaussian mixture models from a statistical-mechanical point of view. Conventional Gaussian mixture models are formulated to divide the points of a given data set into classes. We introduce quantum states constructed by superposing conventional classes in linear combinations. Our extension provides a new algorithm for classifying data by means of linear response formulas from statistical mechanics.

PDF PDF DOI [BibTex]



A Robot System for Biomimetic Navigation: From Snapshots to Metric Embeddings of View Graphs

Franz, MO., Stürzl, W., Reichardt, W., Mallot, HA.

In Robotics and Cognitive Approaches to Spatial Mapping, pages: 297-314, Springer Tracts in Advanced Robotics ; 38, (Editors: Jefferies, M.E. , W.-K. Yeap), Springer, Berlin, Germany, 2008 (inbook)

Abstract
Complex navigation behaviour (way-finding) involves recognizing several places and encoding a spatial relationship between them. Way-finding skills can be classified into a hierarchy according to the complexity of the tasks that can be performed [8]. The most basic form of way-finding is route navigation, followed by topological navigation where several routes are integrated into a graph-like representation. The highest level, survey navigation, is reached when this graph can be embedded into a common reference frame. In this chapter, we present the building blocks for a biomimetic robot navigation system that encompasses all levels of this hierarchy. As a local navigation method, we use scene-based homing. In this scheme, a goal location is characterized either by a panoramic snapshot of the light intensities as seen from the place, or by a record of the distances to the surrounding objects. The goal is found by moving in the direction that minimizes the discrepancy between the recorded intensities or distances and the current sensory input. For learning routes, the robot selects distinct views during exploration that are close enough to be reached by snapshot-based homing. When it encounters already visited places during route learning, it connects the routes and thus forms a topological representation of its environment termed a view graph. The final stage, survey navigation, is achieved by a graph embedding procedure which complements the topological information of the view graph with odometric position estimates. Calculation of the graph embedding is done with a modified multidimensional scaling algorithm which makes use of distances and angles between nodes.

PDF PDF DOI [BibTex]



Minimal Nonlinear Distortion Principle for Nonlinear Independent Component Analysis

Zhang, K., Chan, L.

Journal of Machine Learning Research, 9, pages: 2455-2487, 2008 (article)

Abstract
It is well known that solutions to the nonlinear independent component analysis (ICA) problem are highly non-unique. In this paper we propose the "minimal nonlinear distortion" (MND) principle for tackling the ill-posedness of nonlinear ICA problems. MND prefers the nonlinear ICA solution with the estimated mixing procedure as close as possible to linear, among all possible solutions. It also helps to avoid local optima in the solutions. To achieve MND, we exploit a regularization term to minimize the mean square error between the nonlinear mixing mapping and the best-fitting linear one. The effect of MND on the inherent trivial and non-trivial indeterminacies in nonlinear ICA solutions is investigated. Moreover, we show that local MND is closely related to the smoothness regularizer penalizing large curvature, which provides another useful regularization condition for nonlinear ICA. Experiments on synthetic data show the usefulness of the MND principle for separating various nonlinear mixtures. Finally, as an application, we use nonlinear ICA with MND to separate daily returns of a set of stocks in Hong Kong, and the linear causal relations among them are successfully discovered. The resulting causal relations give some interesting insights into the stock market. Such a result cannot be achieved by linear ICA. Simulation studies also verify that when doing causality discovery, sometimes one should not ignore the nonlinear distortion in the data generation procedure, even if it is weak.
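The MND regularizer itself is compact: fit the best affine approximation to the learned nonlinear map on the current samples and penalize the residual. A minimal sketch, with an ordinary least-squares fit standing in for the inner optimization:

    import numpy as np

    def mnd_penalty(S, G):
        # S: (n, d) inputs; G: (n, d) outputs of the nonlinear map at those inputs
        S1 = np.hstack([S, np.ones((len(S), 1))])    # allow an affine fit
        W, *_ = np.linalg.lstsq(S1, G, rcond=None)   # best-fitting linear map
        return np.mean((G - S1 @ W) ** 2)            # mean square nonlinear distortion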

Web [BibTex]



Transport processes in networks with scattering ramification nodes

Radl, A.

Journal of Applied Functional Analysis, 3, pages: 461-483, 2008 (article)

Web [BibTex]



Learning to control in operational space

Peters, J., Schaal, S.

International Journal of Robotics Research, 27, pages: 197-212, 2008, clmc (article)

Abstract
One of the most general frameworks for phrasing control problems for complex, redundant robots is operational space control. However, while this framework is of essential importance for robotics and well understood from an analytical point of view, it can be prohibitively hard to achieve accurate control in the face of modeling errors, which are inevitable in complex robots, e.g., humanoid robots. In this paper, we suggest a learning approach for operational space control as a direct inverse model learning problem. A first important insight for this paper is that a physically correct solution to the inverse problem with redundant degrees-of-freedom does exist when learning of the inverse map is performed in a suitable piecewise linear way. The second crucial component of our work is the insight that many operational space controllers can be understood in terms of a constrained optimal control problem. The cost function associated with this optimal control problem allows us to formulate a learning algorithm that automatically synthesizes a globally consistent desired resolution of redundancy while learning the operational space controller. From the machine learning point of view, this learning problem corresponds to a reinforcement learning problem that maximizes an immediate reward. We employ an expectation-maximization policy search algorithm in order to solve this problem. Evaluations on a three degrees-of-freedom robot arm are used to illustrate the suggested approach. The application to a physically realistic simulator of the anthropomorphic SARCOS Master arm demonstrates feasibility for complex high degree-of-freedom robots. We also show that the proposed method works in the setting of learning resolved motion rate control on a real, physical Mitsubishi PA-10 medical robotics arm.

link (url) DOI [BibTex]


2004


On the representation, learning and transfer of spatio-temporal movement characteristics

Ilg, W., Bakir, GH., Mezger, J., Giese, M.

International Journal of Humanoid Robotics, 1(4):613-636, December 2004 (article)

[BibTex]



Insect-inspired estimation of egomotion

Franz, MO., Chahl, JS., Krapp, HG.

Neural Computation, 16(11):2245-2260, November 2004 (article)

Abstract
Tangential neurons in the fly brain are sensitive to the typical optic flow patterns generated during egomotion. In this study, we examine whether a simplified linear model based on the organization principles in tangential neurons can be used to estimate egomotion from the optic flow. We present a theory for the construction of an estimator consisting of a linear combination of optic flow vectors that incorporates prior knowledge both about the distance distribution of the environment, and about the noise and egomotion statistics of the sensor. The estimator is tested on a gantry carrying an omnidirectional vision sensor. The experiments show that the proposed approach leads to accurate and robust estimates of rotation rates, whereas translation estimates are of reasonable quality, albeit less reliable.
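A minimal sketch of the estimator class analyzed here: egomotion is read out as a fixed linear combination of the optic-flow measurements, with the weights obtained from flow/egomotion pairs whose statistics play the role of the priors mentioned above. The synthetic linear flow model, dimensions and noise level are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    n_flow, n_train = 200, 5000
    M = rng.normal(size=(n_flow, 6))        # assumed flow model: flow = M @ egomotion + noise
    ego = rng.normal(size=(n_train, 6))     # egomotion statistics of the sensor
    flow = ego @ M.T + 0.1 * rng.normal(size=(n_train, n_flow))
    W, *_ = np.linalg.lstsq(flow, ego, rcond=None)   # linear estimator weights
    ego_hat = flow @ W                      # rotation/translation estimates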

PDF PostScript Web DOI [BibTex]



Efficient face detection by a cascaded support-vector machine expansion

Romdhani, S., Torr, P., Schölkopf, B., Blake, A.

Proceedings of The Royal Society of London A, 460(2501):3283-3297, November 2004 (article)

Abstract
We describe a fast system for the detection and localization of human faces in images using a nonlinear 'support-vector machine'. We approximate the decision surface in terms of a reduced set of expansion vectors and propose a cascaded evaluation which has the property that the full support-vector expansion is only evaluated on the face-like parts of the image, while the largest part of typical images is classified using a single expansion vector (a simpler and more efficient classifier). As a result, only three reduced-set vectors are used, on average, to classify an image patch. Hence, the cascaded evaluation, presented in this paper, offers a thirtyfold speed-up over an evaluation using the full set of reduced-set vectors, which is itself already thirty times faster than classification using all the support vectors.
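The cascade logic itself is compact: evaluate reduced-set approximations of increasing size, and pass an image patch to the next (costlier) stage only while the current stage still classifies it as face-like. A schematic sketch, with the kernel, per-stage thresholds and stage sizes as placeholder assumptions rather than the paper's trained values:

    import numpy as np

    def rbf(x, Z, gamma=0.1):
        # kernel evaluations of one patch x against the expansion vectors Z
        return np.exp(-gamma * ((Z - x) ** 2).sum(1))

    def cascade_classify(x, stages):
        # stages: list of (Z, beta, b, threshold) with growing numbers of
        # expansion vectors; the first stage may hold a single vector
        for Z, beta, b, thr in stages:
            if rbf(x, Z) @ beta + b < thr:
                return -1          # rejected early: cheap for most patches
        return +1                  # survived all stages: classified as a face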

PDF DOI [BibTex]



S-cones contribute to flicker brightness in human vision

Wehrhahn, C., Hill, NJ., Dillenburger, B.

34(174.12), 34th Annual Meeting of the Society for Neuroscience (Neuroscience), October 2004 (poster)

Abstract
In the retina of primates, three cone types sensitive to short, middle and long wavelengths of light convert photons into electrical signals. Many investigators have presented evidence that, in color normal observers, the signals of cones sensitive to short wavelengths of light (S-cones) do not contribute to the perception of brightness of a colored surface when this is alternated with an achromatic reference (flicker brightness). Other studies indicate that humans do use S-cone signals when performing this task. Common to all these studies is the small number of observers whose performance data are reported. Considerable variability in the occurrence of cone types across observers has been found, but, to our knowledge, no cone counts exist for larger populations of humans. We reinvestigated how much the S-cones contribute to flicker brightness. 76 color normal observers were tested in a simple psychophysical procedure neutral to cone type occurrence (Teufel & Wehrhahn (2000), JOSA A 17: 994 - 1006). The data show that, in the majority of our observers, S-cones provide input with a negative sign - relative to the L- and M-cone contribution - in the task in question. There is indeed considerable between-subject variability, such that for 20 out of 76 observers the magnitude of this input does not differ significantly from 0. Finally, we argue that the sign of an observer's S-cone contribution to flicker brightness perception cannot be used to infer the relative sign of their contributions to the neuronal signals carrying the information leading to the perception of flicker brightness. We conclude that studies which use only a small number of observers may easily fail to find significant evidence for the small but significant population tendency for the S-cones to contribute to flicker brightness. Our results confirm all earlier results and reconcile their contradictory interpretations.

Web [BibTex]



Learning Motor Primitives with Reinforcement Learning

Peters, J., Schaal, S.

AAAI Fall Symposium on Real-Life Reinforcement Learning 2004, pages: 1, October 2004 (poster)

Web [BibTex]



Learning kernels from biological networks by maximizing entropy

Tsuda, K., Noble, W.

Bioinformatics, 20(Suppl. 1):i326-i333, August 2004 (article)

Abstract
Motivation: The diffusion kernel is a general method for computing pairwise distances among all nodes in a graph, based on the sum of weighted paths between each pair of nodes. This technique has been used successfully, in conjunction with kernel-based learning methods, to draw inferences from several types of biological networks. Results: We show that computing the diffusion kernel is equivalent to maximizing the von Neumann entropy, subject to a global constraint on the sum of the Euclidean distances between nodes. This global constraint allows for high variance in the pairwise distances. Accordingly, we propose an alternative, locally constrained diffusion kernel, and we demonstrate that the resulting kernel allows for more accurate support vector machine prediction of protein functional classifications from metabolic and protein–protein interaction networks.
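For concreteness, the diffusion kernel under discussion is the matrix exponential of beta times the negative graph Laplacian. A minimal sketch on a small hand-made graph; the graph and beta are illustrative:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], float)   # adjacency matrix of a 4-node graph
    H = A - np.diag(A.sum(1))             # negative graph Laplacian
    K = expm(0.5 * H)                     # diffusion kernel with beta = 0.5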

PDF Web [BibTex]



Masking effect produced by Mach bands on the detection of narrow bars of random polarity

Henning, GB., Hoddinott, KT., Wilson-Smith, ZJ., Hill, NJ.

Journal of the Optical Society of America A, 21(8):1379-1387, August 2004 (article)

[BibTex]



Analysis of differential gene expression in healthy and osteoarthritic cartilage and isolated chondrocytes by microarray analysis

Aigner, T., Saas, J., Zien, A., Zimmer, R., Gebhard, P., Knorr, T.

In Volume 1: Cellular and Molecular Tools, pages: 109-128, (Editors: Sabatini, M., P. Pastoureau and F. De Ceuninck), Humana Press, July 2004 (inbook)

Abstract
The regulation of chondrocytes in osteoarthritic cartilage and the expression of specific gene products by these cells during early-onset and late-stage osteoarthritis are not well characterized. With the introduction of cDNA array technology, the measurement of thousands of different genes in one small tissue sample can be carried out. Interpretation of gene expression analyses in articular cartilage is aided by the fact that this tissue contains only one cell type in both normal and diseased conditions. However, care has to be taken not to over- and misinterpret results, and some major challenges must be overcome in order to utilize the potential of this technology properly in the field of osteoarthritis.

Web [BibTex]



Support Vector Channel Selection in BCI

Lal, T., Schröder, M., Hinterberger, T., Weston, J., Bogdan, M., Birbaumer, N., Schölkopf, B.

IEEE Transactions on Biomedical Engineering, 51(6):1003-1010, June 2004 (article)

Abstract
When designing a Brain Computer Interface (BCI) system, one can choose from a variety of features that may be useful for classifying brain activity during a mental task. For the special case of classifying EEG signals, we propose the use of the state-of-the-art feature selection algorithms Recursive Feature Elimination and Zero-Norm Optimization, which are based on the training of Support Vector Machines (SVM). These algorithms can provide more accurate solutions than standard filter methods for feature selection. We adapt the methods to the purpose of selecting EEG channels. For a motor imagery paradigm, we show that the number of channels used can be reduced significantly without increasing the classification error. The resulting best channels agree well with the expected underlying cortical activity patterns during the mental tasks. Furthermore, we show how time-dependent, task-specific information can be visualized.
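A sketch of Recursive Feature Elimination adapted to channels rather than individual features, as described above: train a linear SVM, score each channel by the squared-weight mass of its features, and drop the weakest channel until the desired number remains. The channel-major feature layout and the use of scikit-learn's LinearSVC are assumptions made for the sketch:

    import numpy as np
    from sklearn.svm import LinearSVC

    def channel_rfe(X, y, n_channels, feats_per_channel, keep=8):
        channels = list(range(n_channels))
        while len(channels) > keep:
            cols = [c * feats_per_channel + f for c in channels
                    for f in range(feats_per_channel)]
            w = LinearSVC(dual=False).fit(X[:, cols], y).coef_.ravel()
            scores = [np.sum(w[i * feats_per_channel:(i + 1) * feats_per_channel] ** 2)
                      for i in range(len(channels))]
            channels.pop(int(np.argmin(scores)))   # eliminate the least useful channel
        return channels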

DOI [BibTex]



Distance-Based Classification with Lipschitz Functions

von Luxburg, U., Bousquet, O.

Journal of Machine Learning Research, 5, pages: 669-695, June 2004 (article)

Abstract
The goal of this article is to develop a framework for large margin classification in metric spaces. We want to find a generalization of linear decision functions for metric spaces and define a corresponding notion of margin such that the decision function separates the training points with a large margin. It will turn out that using Lipschitz functions as decision functions, the inverse of the Lipschitz constant can be interpreted as the size of a margin. In order to construct a clean mathematical setup we isometrically embed the given metric space into a Banach space and the space of Lipschitz functions into its dual space. To analyze the resulting algorithm, we prove several representer theorems. They state that there always exist solutions of the Lipschitz classifier which can be expressed in terms of distance functions to training points. We provide generalization bounds for Lipschitz classifiers in terms of the Rademacher complexities of some Lipschitz function classes. The generality of our approach can be seen from the fact that several well-known algorithms are special cases of the Lipschitz classifier, among them the support vector machine, the linear programming machine, and the 1-nearest neighbor classifier.

PDF PostScript PDF [BibTex]



Distributed Command Execution

Stark, S., Berlin, M.

In BSD Hacks: 100 industrial-strength tips & tools, pages: 152-152, (Editors: Lavigne, Dru), O’Reilly, Beijing, May 2004 (inbook)

Abstract
Often you want to execute a command not only on one computer, but on several at once. For example, you might want to report the current statistics on a group of managed servers or update all of your web servers at once.

[BibTex]



Learning Motor Primitives with Reinforcement Learning

Peters, J., Schaal, S.

11th Joint Symposium on Neural Computation (JSNC 2004), 11, pages: 1, May 2004 (poster)

Abstract
One of the major challenges in action generation for robotics and in the understanding of human motor control is to learn the "building blocks of movement generation," or, more precisely, motor primitives. Recently, Ijspeert et al. [1, 2] suggested a novel framework for using nonlinear dynamical systems as motor primitives. While a lot of progress has been made in teaching these motor primitives using supervised or imitation learning, self-improvement through interaction of the system with the environment remains a challenging problem. In this poster, we evaluate how different reinforcement learning approaches can be used to improve the performance of motor primitives. In pursuing this goal, we highlight the difficulties with current reinforcement learning methods and outline how these lead to a novel algorithm based on natural policy gradients [3]. We compare this algorithm to previous reinforcement learning algorithms in the context of dynamic motor primitive learning and show that it outperforms them by at least an order of magnitude. We demonstrate the efficiency of the resulting reinforcement learning method for creating complex behaviors for autonomous robotics. The studied behaviors include both discrete, finite tasks such as baseball swings, as well as complex rhythmic patterns as they occur in biped locomotion.

Web [BibTex]



cDNA-Microarray Technology in Cartilage Research - Functional Genomics of Osteoarthritis [in German]

Aigner, T., Finger, F., Zien, A., Bartnik, E.

Zeitschrift f{\"u}r Orthop{\"a}die und ihre Grenzgebiete, 142(2):241-247, April 2004 (article)

Abstract
Functional genomics represents a new and challenging approach to analyzing complex diseases such as osteoarthritis on a molecular level. The characterization of the molecular changes in the cartilage cells, the chondrocytes, enables a better understanding of the pathomechanisms of the disease. In particular, the identification and characterization of new target molecules for therapeutic intervention is of interest. Also, potential molecular markers for diagnosis and monitoring of osteoarthritis contribute to more appropriate patient management. The DNA-microarray technology complements (but does not replace) biochemical and biological research into new disease-relevant genes. Large-scale functional genomics will identify molecular networks, such as as yet unidentified players in the anabolic-catabolic balance of articular cartilage, as well as disease-relevant intracellular signaling cascades so far largely unknown in articular chondrocytes. At the moment, however, it is also important to recognize the limitations of the microarray technology in order to avoid over-interpretation of the results, which might be misleading and significantly hinder a proper use of this technology's potential in the field of osteoarthritis.

[BibTex]



A Compression Approach to Support Vector Model Selection

von Luxburg, U., Bousquet, O., Schölkopf, B.

Journal of Machine Learning Research, 5, pages: 293-323, April 2004 (article)

Abstract
In this paper we investigate connections between statistical learning theory and data compression on the basis of support vector machine (SVM) model selection. Inspired by several generalization bounds we construct "compression coefficients" for SVMs which measure the amount by which the training labels can be compressed by a code built from the separating hyperplane. The main idea is to relate the coding precision to geometrical concepts such as the width of the margin or the shape of the data in the feature space. The so derived compression coefficients combine well known quantities such as the radius-margin term R^2/rho^2, the eigenvalues of the kernel matrix, and the number of support vectors. To test whether they are useful in practice we ran model selection experiments on benchmark data sets. As a result we found that compression coefficients can fairly accurately predict the parameters for which the test error is minimized.

PDF [BibTex]



Injecting noise for analysing the stability of ICA components

Harmeling, S., Meinecke, F., Müller, K.

Signal Processing, 84(2):255-266, February 2004 (article)

Abstract
Usually, noise is considered to be destructive. We present a new method that constructively injects noise to assess the reliability and the grouping structure of empirical ICA component estimates. Our method can be viewed as a Monte-Carlo-style approximation of the curvature of some performance measure at the solution. Simulations show that the true root-mean-squared angle distances between the real sources and the source estimates can be approximated well by our method. In a toy experiment, we see that we are also able to reveal the underlying grouping structure of the extracted ICA components. Furthermore, an experiment with fetal ECG data demonstrates that our approach is useful for exploratory data analysis of real-world data.
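A condensed sketch of the procedure: re-estimate the components on noise-perturbed copies of the data and score each baseline component by the spread of angles to its best-matching perturbed counterpart. scikit-learn's FastICA stands in for the ICA algorithm, and the noise level and greedy component matching are simplifications:

    import numpy as np
    from sklearn.decomposition import FastICA

    def component_stability(X, n_runs=20, noise=0.05, seed=0):
        rng = np.random.default_rng(seed)
        A0 = FastICA(random_state=0).fit(X).mixing_
        A0 = A0 / np.linalg.norm(A0, axis=0)
        angles = []
        for _ in range(n_runs):
            Xn = X + noise * X.std() * rng.standard_normal(X.shape)
            A = FastICA(random_state=0).fit(Xn).mixing_
            A = A / np.linalg.norm(A, axis=0)
            C = np.abs(A0.T @ A)   # cosines between baseline and perturbed components
            angles.append(np.degrees(np.arccos(np.clip(C.max(1), -1, 1))))
        return np.mean(angles, axis=0)   # small mean angle => stable component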

PDF DOI [BibTex]



Human Classification Behaviour Revisited by Machine Learning

Graf, A., Wichmann, F., Bülthoff, H., Schölkopf, B.

7, pages: 134, (Editors: Bülthoff, H.H., H.A. Mallot, R. Ulrich and F.A. Wichmann), 7th T{\"u}bingen Perception Conference (TWK), February 2004 (poster)

Abstract
We attempt to understand visual classification in humans using both psychophysical and machine learning techniques. Frontal views of human faces were used for a gender classification task. Human subjects classified the faces, and their gender judgment, reaction time (RT) and confidence rating (CR) were recorded for each face. RTs are longer for incorrect answers than for correct ones, high CRs are correlated with low classification errors, and RTs decrease as the CRs increase. These results suggest that patterns difficult to classify require more computation by the brain than patterns easy to classify. Hyperplane learning algorithms such as Support Vector Machines (SVM), Relevance Vector Machines (RVM), Prototype learners (Prot) and K-means learners (Kmean) were used on the same classification task using the principal components of the texture and flow field representations of the faces. The classification performance of the learning algorithms was estimated using the face database with the true gender of the faces as labels, and also with the gender estimated by the subjects. Kmean yields a classification performance close to humans, while SVM and RVM are much better. This surprising behaviour may be due to the fact that humans are trained on real faces during their lifetime while they were here tested on artificial ones, whereas the algorithms were trained and tested on the same set of stimuli. We then correlated the human responses to the distance of the stimuli to the separating hyperplane (SH) of the learning algorithms. On the whole, stimuli far from the SH are classified more accurately, faster and with higher confidence than those near to the SH if we pool data across all our subjects and stimuli. We also find three noteworthy results. First, SVMs and RVMs can learn to classify faces using the subjects' labels but perform much better when using the true labels. Second, correlating the average response of humans (classification error, RT or CR) with the distance to the SH on a face-by-face basis using Spearman's rank correlation coefficients shows that RVMs recreate human performance most closely in every respect. Third, the mean-of-class prototype, its popularity in neuroscience notwithstanding, is the least human-like classifier in all cases examined.

Web [BibTex]



m-Alternative-Forced-Choice: Improving the Efficiency of the Method of Constant Stimuli

Jäkel, F., Hill, J., Wichmann, F.

7, pages: 118, 7th T{\"u}bingen Perception Conference (TWK), February 2004 (poster)

Abstract
We explored several ways to improve the efficiency of measuring psychometric functions without resorting to adaptive procedures. a) Increasing the number m of alternatives in an m-alternative-forced-choice (m-AFC) task improves the efficiency of the method of constant stimuli. b) When alternatives are presented simultaneously at different positions on a screen rather than sequentially, time can be saved and the memory load for the subject can be reduced. c) A touch-screen can further help to make the experimental procedure more intuitive. We tested these ideas in the measurement of contrast sensitivity and compared them to results obtained by sequential presentation in two-interval-forced-choice (2-IFC). Qualitatively, all methods (m-AFC and 2-IFC) recovered the characteristic shape of the contrast sensitivity function in three subjects. The m-AFC paradigm only took about 60% of the time of the 2-IFC task. We tried m=2,4,8 and found 4-AFC to give the best model fits and 2-AFC to have the least bias.

Web [BibTex]



Efficient Approximations for Support Vector Classifiers

Kienzle, W., Franz, M.

7, pages: 68, 7th T{\"u}bingen Perception Conference (TWK), February 2004 (poster)

Abstract
In face detection, support vector machines (SVM) and neural networks (NN) have been shown to outperform most other classification methods. While both approaches are learning-based, there are distinct advantages and drawbacks to each method: NNs are difficult to design and train but can lead to very small and efficient classifiers. In comparison, SVM model selection and training is rather straightforward, and, more importantly, guaranteed to converge to a globally optimal (in the sense of training errors) solution. Unfortunately, SVM classifiers tend to have large representations which are inappropriate for time-critical image processing applications. In this work, we examine various existing and new methods for simplifying support vector decision rules. Our goal is to obtain efficient classifiers (as with NNs) while keeping the numerical and statistical advantages of SVMs. For a given SVM solution, we compute a cascade of approximations with increasing complexities. Each classifier is tuned so that the detection rate is near 100%. At run-time, the first (simplest) detector is evaluated on the whole image. Then, any subsequent classifier is applied only to those positions that have been classified as positive throughout all previous stages. The false positive rate at the end equals that of the last (i.e. most complex) detector. In contrast, since many image positions are discarded by lower-complexity classifiers, the average computation time per patch decreases significantly compared to the time needed for evaluating the highest-complexity classifier alone.

Web [BibTex]



Independent component analysis and beyond

Oja, E., Harmeling, S., Almeida, L.

Signal Processing, 84(2):215-216, February 2004 (article)

PDF DOI [BibTex]



Selective Attention to Auditory Stimuli: A Brain-Computer Interface Paradigm

Hill, N., Lal, T., Schröder, M., Hinterberger, T., Birbaumer, N., Schölkopf, B.

7, pages: 102, (Editors: Bülthoff, H.H., H.A. Mallot, R. Ulrich and F.A. Wichmann), 7th T{\"u}bingen Perception Conference (TWK), February 2004 (poster)

Abstract
During the last 20 years several paradigms for Brain Computer Interfaces have been proposed; see [1] for a recent review. They can be divided into (a) stimulus-driven paradigms, using e.g. event-related potentials or visual evoked potentials from an EEG signal, and (b) patient-driven paradigms such as those that use premotor potentials correlated with imagined action, or slow cortical potentials (e.g. [2]). Our aim is to develop a stimulus-driven paradigm that is applicable in practice to patients. Due to the unreliability of visual perception in "locked-in" patients in the later stages of disorders such as Amyotrophic Lateral Sclerosis, we concentrate on the auditory modality. Specifically, we look for the effects, in the EEG signal, of selective attention to one of two concurrent auditory stimulus streams, exploiting the increased activation to attended stimuli that is seen under some circumstances [3]. We present the results of our preliminary experiments on normal subjects. On each of 400 trials, two repetitive stimuli (sequences of drum-beats or other pulsed stimuli) could be heard simultaneously. The two stimuli were distinguishable from one another by their acoustic properties, by their source location (one from a speaker to the left of the subject, the other from the right), and by their differing periodicities. A visual cue preceded the stimulus by 500 msec, indicating which of the two stimuli to attend to, and the subject was instructed to count the beats in the attended stimulus stream. There were up to 6 beats of each stimulus: with equal probability on each trial, all 6 were played, or the fourth was omitted, or the fifth was omitted. The 40-channel EEG signals were analyzed offline to reconstruct which of the streams was attended on each trial. A linear Support Vector Machine [4] was trained on a random subset of the data and tested on the remainder. Results are compared from two types of pre-processing of the signal: for each stimulus stream, (a) EEG signals at the stream's beat periodicity are emphasized, or (b) EEG signals following beats are contrasted with those following missing beats. Both forms of pre-processing show promising results, i.e. that selective attention to one or the other auditory stream yields signals that are classifiable significantly above chance performance. In particular, the second pre-processing was found to be robust to reduction in the number of features used for classification (cf. [5]), helping us to eliminate noise.

PDF Web [BibTex]



Texture and Haptic Cues in Slant Discrimination: Measuring the Effect of Texture Type

Rosas, P., Wichmann, F., Ernst, M., Wagemans, J.

7, pages: 165, (Editors: Bülthoff, H. H., H. A. Mallot, R. Ulrich, F. A. Wichmann), 7th T{\"u}bingen Perception Conference (TWK), February 2004 (poster)

Abstract
In a number of models of depth cue combination the depth percept is constructed via a weighted average combination of independent depth estimations. The influence of each cue in such an average depends on the reliability of the source of information [1,5]. In particular, Ernst and Banks (2002) formulate such a combination as that of the minimum variance unbiased estimator that can be constructed from the available cues. We have observed systematic differences in slant discrimination performance of human observers when different types of textures were used as cue to slant [4]. If the depth percept behaves as described above, our measurements of the slopes of the psychometric functions provide the predicted weights for the texture cue for the ranked texture types. However, the results for slant discrimination obtained when combining these texture types with object motion are difficult to reconcile with the minimum variance unbiased estimator model [3]. This apparent failure of such a model might be explained by the existence of a coupling of texture and motion, violating the assumption of independence of cues. Hillis, Ernst, Banks, and Landy (2002) [2] have shown that while for between-modality combination the human visual system has access to the single-cue information, for within-modality combination (visual cues) the single-cue information is lost. This suggests a coupling between visual cues and independence between visual and haptic cues. Thus, in the present study we combined the different texture types with haptic information in a slant discrimination task, to test whether in the between-modality condition these cues are combined as predicted by an unbiased, minimum variance estimator model. The measured weights for the cues were consistent with a combination rule sensitive to the reliability of the sources of information, but did not match the predictions of a statistically optimal combination.

PDF Web [BibTex]





EEG Channel Selection for Brain Computer Interface Systems Based on Support Vector Methods

Schröder, M., Lal, T., Bogdan, M., Schölkopf, B.

7, pages: 50, (Editors: Bülthoff, H.H., H.A. Mallot, R. Ulrich and F.A. Wichmann), 7th T{\"u}bingen Perception Conference (TWK), February 2004 (poster)

Abstract
A Brain Computer Interface (BCI) system allows the direct interpretation of brain activity patterns (e.g. EEG signals) by a computer. Typical BCI applications comprise spelling aids or environmental control systems supporting paralyzed patients that have lost motor control completely. The design of an EEG based BCI system requires good answers for the problem of selecting useful features during the performance of a mental task as well as for the problem of classifying these features. For the special case of choosing appropriate EEG channels from several available channels, we propose the application of variants of the Support Vector Machine (SVM) for both problems. Although these algorithms do not rely on prior knowledge, they can provide more accurate solutions than standard filter methods [1] for feature selection, which usually incorporate prior knowledge about neural activity patterns during the performed mental tasks. For judging the importance of features we introduce a new relevance measure and apply it to EEG channels. Although we base the relevance measure for this purpose on the previously introduced algorithms, it does not, in general, depend on specific algorithms but can be derived using arbitrary combinations of feature selectors and classifiers.

Web [BibTex]
