

2007


Some Theoretical Aspects of Human Categorization Behavior: Similarity and Generalization

Jäkel, F.

Biologische Kybernetik, Eberhard-Karls-Universität Tübingen, Tübingen, Germany, November 2007, passed with "ausgezeichnet", summa cum laude, published online (phdthesis)

PDF [BibTex]

Statistical Learning Theory Approaches to Clustering

Jegelka, S.

Biologische Kybernetik, Eberhard-Karls-Universität Tübingen, Tübingen, Germany, November 2007 (diplomathesis)

PDF [BibTex]

Support Vector Machine Learning for Interdependent and Structured Output Spaces

Altun, Y., Hofmann, T., Tsochantaridis, I.

In Predicting Structured Data, pages: 85-104, Advances in neural information processing systems, (Editors: Bakir, G. H., T. Hofmann, B. Schölkopf, A. J. Smola, B. Taskar, S. V. N. Vishwanathan), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Web [BibTex]

Brisk Kernel ICA

Jegelka, S., Gretton, A.

In Large Scale Kernel Machines, pages: 225-250, Neural Information Processing, (Editors: Bottou, L., O. Chapelle, D. DeCoste, J. Weston), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Abstract
Recent approaches to independent component analysis have used kernel independence measures to obtain very good performance in ICA, particularly in areas where classical methods experience difficulty (for instance, sources with near-zero kurtosis). In this chapter, we compare two efficient extensions of these methods for large-scale problems: random subsampling of entries in the Gram matrices used in defining the independence measures, and incomplete Cholesky decomposition of these matrices. We derive closed-form, efficiently computable approximations for the gradients of these measures, and compare their performance on ICA using both artificial and music data. We show that kernel ICA can scale up to much larger problems than yet attempted, and that incomplete Cholesky decomposition performs better than random sampling.
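The incomplete Cholesky route described in the abstract can be sketched in a few lines (an illustrative NumPy sketch, not the authors' implementation; the function name and stopping rule are our choices):

```python
import numpy as np

def incomplete_cholesky(K, tol=1e-6, max_rank=None):
    """Pivoted incomplete Cholesky of a PSD Gram matrix K.

    Returns G of shape (n, r) with K ~= G @ G.T; stops once the
    largest residual diagonal entry drops below tol, so the
    elementwise approximation error is at most tol.
    """
    n = K.shape[0]
    max_rank = n if max_rank is None else max_rank
    d = np.diag(K).astype(float).copy()      # residual diagonal
    G = np.zeros((n, max_rank))
    r = 0
    for j in range(max_rank):
        i = int(np.argmax(d))                # greedy pivot: largest residual
        if d[i] <= tol:
            break
        G[:, j] = (K[:, i] - G[:, :j] @ G[i, :j]) / np.sqrt(d[i])
        d -= G[:, j] ** 2                    # update residual diagonal
        r += 1
    return G[:, :r]
```

Replacing an n-by-n Gram matrix by such a rank-r factor reduces the cost of the matrix-vector products inside a kernel independence measure from O(n^2) to O(nr).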

PDF Web [BibTex]

Training a Support Vector Machine in the Primal

Chapelle, O.

In Large Scale Kernel Machines, pages: 29-50, Neural Information Processing, (Editors: Bottou, L., O. Chapelle, D. DeCoste, J. Weston), MIT Press, Cambridge, MA, USA, September 2007; a slightly updated version of the Neural Computation paper (inbook)

Abstract
Most literature on Support Vector Machines (SVMs) concentrates on the dual optimization problem. In this paper, we would like to point out that the primal problem can also be solved efficiently, both for linear and non-linear SVMs, and that there is no reason to ignore this possibility. On the contrary, from the primal point of view new families of algorithms for large scale SVM training can be investigated.
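For the linear case, the primal view the abstract advocates amounts to minimising a regularised margin loss directly in the weight vector; a minimal sketch under those assumptions (plain gradient descent on the squared hinge loss rather than the Newton steps the chapter derives; names are ours):

```python
import numpy as np

def train_linear_svm_primal(X, y, lam=1e-2, lr=0.1, n_iter=500):
    """Minimise lam*||w||^2 + sum_i max(0, 1 - y_i <w, x_i>)^2 by
    gradient descent; the squared hinge makes the objective
    differentiable, so no dual variables are needed."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        margins = 1.0 - y * (X @ w)
        active = margins > 0                 # points violating the margin
        grad = 2 * lam * w - 2 * X[active].T @ (y[active] * margins[active])
        w -= lr * grad / n                   # averaged step for stability
    return w
```

Only the current margin violators enter the gradient; Newton steps over the same active set, as discussed in the chapter, converge in far fewer iterations.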

PDF Web [BibTex]

Approximation Methods for Gaussian Process Regression

Quiñonero-Candela, J., Rasmussen, CE., Williams, CKI.

In Large-Scale Kernel Machines, pages: 203-223, Neural Information Processing, (Editors: Bottou, L., O. Chapelle, D. DeCoste, J. Weston), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Abstract
A wealth of computationally efficient approximation methods for Gaussian process regression have been recently proposed. We give a unifying overview of sparse approximations, following Quiñonero-Candela and Rasmussen (2005), and a brief review of approximate matrix-vector multiplication methods.
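As a concrete instance of such a sparse approximation, the Subset-of-Regressors (SoR) predictive mean from the taxonomy of Quiñonero-Candela and Rasmussen (2005) can be written compactly (an illustrative sketch with our own function names and a fixed squared-exponential kernel):

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel matrix between row sets A and B."""
    sq = ((A[:, None] - B[None]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / ls ** 2)

def sor_mean(X, y, Xu, Xs, noise=0.1, ls=1.0):
    """SoR predictive mean at test inputs Xs: the full Gram matrix is
    replaced by its Nystroem approximation through the m inducing
    inputs Xu, cutting the cost from O(n^3) to O(n m^2)."""
    Kuf = rbf(Xu, X, ls)                     # (m, n) inducing-vs-training
    Kus = rbf(Xu, Xs, ls)                    # (m, s) inducing-vs-test
    A = noise ** 2 * rbf(Xu, Xu, ls) + Kuf @ Kuf.T
    return Kus.T @ np.linalg.solve(A, Kuf @ y)
```

The only design choice here is where to place the inducing inputs Xu; the chapter's unifying view shows how the various published approximations differ mainly in how they treat the resulting low-rank prior.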

PDF Web [BibTex]

Trading Convexity for Scalability

Collobert, R., Sinz, F., Weston, J., Bottou, L.

In Large Scale Kernel Machines, pages: 275-300, Neural Information Processing, (Editors: Bottou, L., O. Chapelle, D. DeCoste, J. Weston), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Abstract
Convex learning algorithms, such as Support Vector Machines (SVMs), are often seen as highly desirable because they offer strong practical properties and are amenable to theoretical analysis. However, in this work we show how nonconvexity can provide scalability advantages over convexity. We show how concave-convex programming can be applied to produce (i) faster SVMs where training errors are no longer support vectors, and (ii) much faster Transductive SVMs.

PDF Web [BibTex]

Classifying Event-Related Desynchronization in EEG, ECoG and MEG signals

Hill, N., Lal, T., Tangermann, M., Hinterberger, T., Widman, G., Elger, C., Schölkopf, B., Birbaumer, N.

In Toward Brain-Computer Interfacing, pages: 235-260, Neural Information Processing, (Editors: G Dornhege and J del R Millán and T Hinterberger and DJ McFarland and K-R Müller), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

PDF Web [BibTex]

Joint Kernel Maps

Weston, J., Bakir, G., Bousquet, O., Mann, T., Noble, W., Schölkopf, B.

In Predicting Structured Data, pages: 67-84, Advances in neural information processing systems, (Editors: GH Bakir and T Hofmann and B Schölkopf and AJ Smola and B Taskar and SVN Vishwanathan), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Web [BibTex]

Brain-Computer Interfaces for Communication in Paralysis: A Clinical Experimental Approach

Hinterberger, T., Nijboer, F., Kübler, A., Matuz, T., Furdea, A., Mochty, U., Jordan, M., Lal, T., Hill, J., Mellinger, J., Bensch, M., Tangermann, M., Widman, G., Elger, C., Rosenstiel, W., Schölkopf, B., Birbaumer, N.

In Toward Brain-Computer Interfacing, pages: 43-64, Neural Information Processing, (Editors: G. Dornhege and J del R Millán and T Hinterberger and DJ McFarland and K-R Müller), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

PDF Web [BibTex]

Error Correcting Codes for the P300 Visual Speller

Biessmann, F.

Biologische Kybernetik, Eberhard-Karls-Universität Tübingen, Tübingen, Germany, July 2007 (diplomathesis)

Abstract
The aim of brain-computer interface (BCI) research is to establish a communication system based on intentional modulation of brain activity. This is accomplished by classifying patterns of brain activity, volitionally induced by the user. The BCI presented in this study is based on a classical paradigm as proposed by Farwell and Donchin (1988), the P300 visual speller. Recording electroencephalograms (EEG) from the scalp while presenting letters successively to the user, the speller can infer from the brain signal which letter the user was focussing on. Since EEG recordings are noisy, usually many repetitions are needed to detect the correct letter. The focus of this study was to improve the accuracy of the visual speller by applying some basic principles from information theory: the stimulus sequences of the speller were modified into error-correcting codes. Additionally, a language model was incorporated into the probabilistic letter decoder. Classification of single EEG epochs was less accurate using error-correcting codes. However, the novel code could compensate for this, such that overall letter accuracies were as high as or even higher than for classical stimulus codes. In particular, at high noise levels error-correcting decoding achieved higher letter accuracies.
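The error-correcting idea can be illustrated independently of EEG: if the stimulus sequences assigned to letters form codewords with minimum Hamming distance 3, any single misclassified epoch is still decoded to the right letter by nearest-codeword decoding (a toy sketch, not the actual codes used in the thesis):

```python
import numpy as np

# Toy codebook: four "letters", codewords of length 6 with pairwise
# Hamming distance >= 3, so one flipped bit is always correctable.
CODE = np.array([
    [0, 0, 0, 0, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [0, 0, 1, 1, 1, 1],
    [1, 1, 0, 1, 1, 1],
])

def decode(received):
    """Return the index of the nearest codeword (minimum Hamming distance)."""
    return int(np.argmin((CODE != received).sum(axis=1)))
```

Flipping any single bit of any codeword still decodes to the original letter, which is the mechanism behind the reported robustness at high noise levels.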

PDF [BibTex]

Nonparametric Bayesian Discrete Latent Variable Models for Unsupervised Learning

Görür, D.

Biologische Kybernetik, Technische Universität Berlin, Berlin, Germany, April 2007, published online (phdthesis)

PDF PDF [BibTex]

Probabilistic Structure Calculation

Rieping, W., Habeck, M., Nilges, M.

In Structure and Biophysics: New Technologies for Current Challenges in Biology and Beyond, pages: 81-98, NATO Security through Science Series, (Editors: Puglisi, J. D.), Springer, Berlin, Germany, March 2007 (inbook)

Web DOI [BibTex]

Applications of Kernel Machines to Structured Data

Eichhorn, J.

Biologische Kybernetik, Technische Universität Berlin, Berlin, Germany, March 2007, passed with "sehr gut", published online (phdthesis)

PDF [BibTex]

A priori Knowledge from Non-Examples

Sinz, FH.

Biologische Kybernetik, Eberhard-Karls-Universität Tübingen, Tübingen, Germany, March 2007 (diplomathesis)

PDF Web [BibTex]

Machine Learning for Mass Production and Industrial Engineering

Pfingsten, T.

Biologische Kybernetik, Eberhard-Karls-Universität Tübingen, Tübingen, Germany, February 2007 (phdthesis)

PDF [BibTex]

On the Pre-Image Problem in Kernel Methods

Bakır, G., Schölkopf, B., Weston, J.

In Kernel Methods in Bioengineering, Signal and Image Processing, pages: 284-302, (Editors: G Camps-Valls and JL Rojo-Álvarez and M Martínez-Ramón), Idea Group Publishing, Hershey, PA, USA, January 2007 (inbook)

Abstract
In this chapter we are concerned with the problem of reconstructing patterns from their representation in feature space, known as the pre-image problem. We review existing algorithms and propose a learning-based approach. All algorithms are discussed regarding their usability and complexity, and evaluated on an image denoising application.

DOI [BibTex]

Development of a Brain-Computer Interface Approach Based on Covert Attention to Tactile Stimuli

Raths, C.

University of Tübingen, Tübingen, Germany, January 2007 (diplomathesis)

[BibTex]

A Machine Learning Approach for Estimating the Attenuation Map for a Combined PET/MR Scanner

Hofmann, M.

Biologische Kybernetik, Max Planck Institute for Biological Cybernetics, Tübingen, Germany, 2007 (diplomathesis)

[BibTex]

2005


Extension to Kernel Dependency Estimation with Applications to Robotics

Bakır, G.

Biologische Kybernetik, Technische Universität Berlin, Berlin, November 2005 (phdthesis)

Abstract
Kernel Dependency Estimation (KDE) is a novel technique which was designed to learn mappings between sets without making assumptions on the type of the involved input and output data. It learns the mapping in two stages. In a first step, it tries to estimate coordinates of a feature space representation of elements of the set by solving a high-dimensional multivariate regression problem in feature space. Following this, it tries to reconstruct the original representation given the estimated coordinates. This thesis introduces various algorithmic extensions to both stages in KDE. One of the contributions of this thesis is to propose a novel linear regression algorithm that explores low-dimensional subspaces during learning. Furthermore, various existing strategies for reconstructing patterns from feature maps involved in KDE are discussed and novel pre-image techniques are introduced. In particular, pre-image techniques for data types that are of discrete nature, such as graphs and strings, are investigated. KDE is then explored in the context of robot pose imitation, where the input is an image of a human operator and the output is the robot's articulated variables. Thus, using KDE, robot pose imitation is formulated as a regression problem.

PDF PDF [BibTex]

Geometrical aspects of statistical learning theory

Hein, M.

Biologische Kybernetik, Darmstadt, Germany, November 2005 (phdthesis)

PDF [BibTex]

Implicit Surfaces For Modelling Human Heads

Steinke, F.

Biologische Kybernetik, Eberhard-Karls-Universität, Tübingen, September 2005 (diplomathesis)

[BibTex]

Machine Learning Methods for Brain-Computer Interfaces

Lal, TN.

Biologische Kybernetik, University of Darmstadt, September 2005 (phdthesis)

Web [BibTex]

Efficient Adaptive Sampling of the Psychometric Function by Maximizing Information Gain

Tanner, TG.

Biologische Kybernetik, Eberhard-Karls University Tübingen, Tübingen, Germany, May 2005 (diplomathesis)

Abstract
A common task in psychophysics is to measure the psychometric function. A psychometric function can be described by its shape and four parameters: offset or threshold, slope or width, false alarm rate or chance level, and miss or lapse rate. Depending on the parameters of interest, some points on the psychometric function may be more informative than others. Adaptive methods attempt to place trials on the most informative points based on the data collected in previous trials. A new Bayesian adaptive psychometric method is introduced, which places trials by minimising the expected entropy of the posterior probability distribution over a set of possible stimuli. The method is more flexible, faster and at least as efficient as the established method (Kontsevich and Tyler, 1999). Comparably accurate (2dB) threshold and slope estimates can be obtained after about 30 and 500 trials, respectively. By using a dynamic termination criterion the efficiency can be further improved. The method can be applied to all experimental designs, including yes/no designs, and allows acquisition of any set of free parameters. By weighting the importance of parameters one can include nuisance parameters and adjust the relative expected errors. Use of nuisance parameters may lead to more accurate estimates than assuming a guessed fixed value. Block designs are supported and do not harm the performance if a sufficient number of trials are performed. The method was evaluated by computer simulations in which the role of parametric assumptions, its robustness, the quality of different point estimates, the effect of dynamic termination criteria and many other settings were investigated.
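One step of such entropy-minimising trial placement can be sketched as follows (an illustrative sketch in the spirit of the method, with only the threshold free and slope, chance level and lapse rate held fixed; all names are ours):

```python
import numpy as np

def next_stimulus(stimuli, thresholds, posterior,
                  slope=1.0, guess=0.5, lapse=0.02):
    """Return the index of the stimulus that minimises the expected
    entropy of the posterior over threshold hypotheses after one trial."""
    def pf(x, t):  # psychometric function: p(correct | stimulus, threshold)
        return guess + (1 - guess - lapse) / (1 + np.exp(-slope * (x - t)))

    P = pf(stimuli[:, None], thresholds[None, :])   # (n_stim, n_thresh)
    expected_H = np.empty(len(stimuli))
    for i, p_correct in enumerate(P):
        H = 0.0
        for likelihood in (p_correct, 1 - p_correct):  # the two outcomes
            post = posterior * likelihood              # unnormalised update
            p_outcome = post.sum()                     # marginal of outcome
            post = post / p_outcome
            H += p_outcome * -(post * np.log(post + 1e-12)).sum()
        expected_H[i] = H
    return int(np.argmin(expected_H))
```

After each trial, the posterior is updated with the observed outcome and the selection is repeated; a dynamic termination criterion, as in the thesis, would stop once the posterior is sufficiently concentrated.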

[BibTex]

Support Vector Machines and Kernel Algorithms

Schölkopf, B., Smola, A.

In Encyclopedia of Biostatistics (2nd edition), Vol. 8, 8, pages: 5328-5335, (Editors: P Armitage and T Colton), John Wiley & Sons, NY USA, 2005 (inbook)

[BibTex]

Visual perception I: Basic principles

Wagemans, J., Wichmann, F., de Beeck, H.

In Handbook of Cognition, pages: 3-47, (Editors: Lamberts, K., R. Goldstone), Sage, London, 2005 (inbook)

[BibTex]
