

2014


Modeling the polygenic architecture of complex traits

Rakitsch, Barbara

Eberhard Karls Universität Tübingen, November 2014 (phdthesis)

[BibTex]

Learning Motor Skills: From Algorithms to Robot Experiments

Kober, J., Peters, J.

97, pages: 191, Springer Tracts in Advanced Robotics, Springer, 2014 (book)

DOI [BibTex]

Computational Diffusion MRI and Brain Connectivity

Schultz, T., Nedjati-Gilani, G., Venkataraman, A., O’Donnell, L., Panagiotaki, E.

pages: 255, Mathematics and Visualization, Springer, 2014 (book)

Web [BibTex]

A Novel Causal Inference Method for Time Series

Shajarisales, N.

Eberhard Karls Universität Tübingen, Germany, 2014 (mastersthesis)

PDF [BibTex]

A global analysis of extreme events and consequences for the terrestrial carbon cycle

Zscheischler, J.

Diss. No. 22043, ETH Zurich, Switzerland, 2014 (phdthesis)

[BibTex]

Development of advanced methods for improving astronomical images

Schmeißer, N.

Eberhard Karls Universität Tübingen, Germany, 2014 (diplomathesis)

[BibTex]

The Feasibility of Causal Discovery in Complex Systems: An Examination of Climate Change Attribution and Detection

Lacosse, E.

Graduate Training Centre of Neuroscience, University of Tübingen, Germany, 2014 (mastersthesis)

[BibTex]

Causal Discovery in the Presence of Time-Dependent Relations or Small Sample Size

Huang, B.

Graduate Training Centre of Neuroscience, University of Tübingen, Germany, 2014 (mastersthesis)

[BibTex]

Analysis of Distance Functions in Graphs

Alamgir, M.

University of Hamburg, Germany, 2014 (phdthesis)

[BibTex]

2002


Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond

Schölkopf, B., Smola, A.

pages: 644, Adaptive Computation and Machine Learning, MIT Press, Cambridge, MA, USA, December 2002, Parts of this book, including an introduction to kernel methods, are available for download. (book)

Abstract
In the 1990s, a new type of learning algorithm was developed, based on results from statistical learning theory: the Support Vector Machine (SVM). This gave rise to a new class of theoretically elegant learning machines that use a central concept of SVMs, kernels, for a number of learning tasks. Kernel machines provide a modular framework that can be adapted to different tasks and domains by the choice of the kernel function and the base algorithm. They are replacing neural networks in a variety of fields, including engineering, information retrieval, and bioinformatics. Learning with Kernels provides an introduction to SVMs and related kernel methods. Although the book begins with the basics, it also includes the latest research. It provides all of the concepts necessary to enable a reader equipped with some basic mathematical knowledge to enter the world of machine learning using theoretically well-founded yet easy-to-use kernel algorithms and to understand and apply the powerful algorithms that have been developed over the last few years.

Web [BibTex]

Kernel Dependency Estimation

Weston, J., Chapelle, O., Elisseeff, A., Schölkopf, B., Vapnik, V.

(98), Max Planck Institute for Biological Cybernetics, August 2002 (techreport)

Abstract
We consider the learning problem of finding a dependency between a general class of objects and another, possibly different, general class of objects. The objects can be, for example, vectors, images, strings, trees, or graphs. Such a task is made possible by employing similarity measures in both input and output spaces using kernel functions, thus embedding the objects into vector spaces. Output kernels also make it possible to encode prior information and/or invariances in the loss function in an elegant way. We experimentally validate our approach on several tasks: mapping strings to strings, pattern recognition, and reconstruction from partial images.

PDF [BibTex]

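For intuition, the core of the approach can be sketched in a few lines of NumPy: embed the outputs with an output kernel, fit a kernel ridge map from inputs to that embedding, and approximate the pre-image step by scoring a candidate set. Everything below (data, kernels, ridge parameter, candidate set) is an illustrative assumption rather than the report's actual construction.

```python
# Hedged sketch of the kernel dependency estimation idea; all choices here
# (RBF kernels, ridge parameter, candidates = training outputs) are assumptions.
import numpy as np

def rbf(A, B, gamma=1.0):
    # Gaussian RBF kernel matrix between the rows of A and B.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                      # inputs
Y = np.sin(X) + 0.1 * rng.normal(size=(50, 3))    # structured outputs

Kx = rbf(X, X)
lam = 1e-2                                        # ridge regularizer (assumed)
alpha = np.linalg.solve(Kx + lam * np.eye(len(X)), np.eye(len(X)))

def predict(x_new, candidates):
    # Score each candidate y_c by the similarity, in output feature space,
    # between the regressed embedding of x_new and the embedding of y_c:
    # k_x(x_new, X) (Kx + lam I)^{-1} K_y(Y, y_c).
    kx = rbf(x_new[None, :], X)
    scores = kx @ alpha @ rbf(Y, candidates)
    return candidates[np.argmax(scores)]

y_hat = predict(rng.normal(size=3), Y)            # candidates: training outputs
```
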
Global Geometry of SVM Classifiers

Zhou, D., Xiao, B., Zhou, H., Dai, R.

Max Planck Institute for Biological Cybernetics, Tübingen, Germany, June 2002 (techreport)

Abstract
We construct a geometric framework for Support Vector Machine (SVM) classifiers with arbitrary norms. Within this framework, separating hyperplanes, dual descriptions, and solutions of SVM classifiers are constructed in a purely geometric fashion. In contrast with the optimization theory usually used for SVM classifiers, no complicated computations are required; each step in our theory is guided by geometric intuition.

PDF PostScript [BibTex]

Computationally Efficient Face Detection

Romdhani, S., Torr, P., Schölkopf, B., Blake, A.

(MSR-TR-2002-69), Microsoft Research, June 2002 (techreport)

Web [BibTex]

Nonlinear Multivariate Analysis with Geodesic Kernels

Kuss, M.

Biologische Kybernetik, Technische Universität Berlin, February 2002 (diplomathesis)

GZIP [BibTex]

Kernel-based nonlinear blind source separation

Harmeling, S., Ziehe, A., Kawanabe, M., Müller, K.

EU-Project BLISS, January 2002 (techreport)

GZIP [BibTex]

Concentration Inequalities and Empirical Processes Theory Applied to the Analysis of Learning Algorithms

Bousquet, O.

Biologische Kybernetik, Ecole Polytechnique, 2002 (phdthesis) Accepted

Abstract
New classification algorithms based on the notion of 'margin' (e.g. Support Vector Machines, Boosting) have recently been developed. The goal of this thesis is to better understand how they work, via a study of their theoretical performance. In order to do this, a general framework for real-valued classification is proposed. In this framework, it appears that the natural tools to use are Concentration Inequalities and Empirical Processes Theory. Thanks to an adaptation of these tools, a new measure of the size of a class of functions is introduced, which can be computed from the data. This allows, on the one hand, to better understand the role of eigenvalues of the kernel matrix in Support Vector Machines, and on the other hand, to obtain empirical model selection criteria.

PostScript [BibTex]


Support Vector Machines: Induction Principle, Adaptive Tuning and Prior Knowledge

Chapelle, O.

Biologische Kybernetik, 2002 (phdthesis)

Abstract
This thesis presents a theoretical and practical study of Support Vector Machines (SVM) and related learning algorithms. In the first part, we introduce a new induction principle from which SVMs can be derived; some new algorithms are also presented in this framework. In the second part, after studying how to estimate the generalization error of an SVM, we suggest choosing the kernel parameters of an SVM by minimizing this estimate. Several applications such as feature selection are presented. Finally, the third part deals with the incorporation of prior knowledge into a learning algorithm; more specifically, we study the case of known invariant transformations and the use of unlabeled data.

GZIP [BibTex]


A compression approach to support vector model selection

von Luxburg, U., Bousquet, O., Schölkopf, B.

(101), Max Planck Institute for Biological Cybernetics, 2002, see more detailed JMLR version (techreport)

Abstract
In this paper we investigate connections between statistical learning theory and data compression on the basis of support vector machine (SVM) model selection. Inspired by several generalization bounds we construct "compression coefficients" for SVMs, which measure the amount by which the training labels can be compressed by some classification hypothesis. The main idea is to relate the coding precision of this hypothesis to the width of the margin of the SVM. The compression coefficients connect well-known quantities such as the radius-margin ratio $R^2/\rho^2$, the eigenvalues of the kernel matrix, and the number of support vectors. To test whether they are useful in practice we ran model selection experiments on several real-world datasets. As a result we found that compression coefficients can fairly accurately predict the parameters for which the test error is minimized.

[BibTex]

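For context, the radius-margin ratio that the compression coefficients connect to is the quantity from Vapnik's classical leave-one-out bound for hard-margin SVMs; a standard statement of that bound (background, not the report's own result) is:

```latex
% R: radius of the smallest sphere enclosing the training points in feature
% space; \rho: margin of the separating hyperplane; n: number of training
% points. The leave-one-out error is controlled by the radius-margin ratio.
\frac{\#\{\text{leave-one-out errors}\}}{n} \;\le\; \frac{1}{n}\,\frac{R^2}{\rho^2}
```
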
Feature Selection and Transduction for Prediction of Molecular Bioactivity for Drug Design

Weston, J., Perez-Cruz, F., Bousquet, O., Chapelle, O., Elisseeff, A., Schölkopf, B.

Max Planck Institute for Biological Cybernetics / Biowulf Technologies, 2002 (techreport)

Web [BibTex]

Web [BibTex]


no image
Observations on the Nyström Method for Gaussian Process Prediction

Williams, C., Rasmussen, C., Schwaighofer, A., Tresp, V.

Max Planck Institute for Biological Cybernetics, Tübingen, Germany, 2002 (techreport)

Abstract
A number of methods for speeding up Gaussian Process (GP) prediction have been proposed, including the Nyström method of Williams and Seeger (2001). In this paper we focus on two issues: (1) the relationship of the Nyström method to the Subset of Regressors method (Poggio and Girosi, 1990; Luo and Wahba, 1997), and (2) understanding in what circumstances the Nyström approximation would be expected to provide a good approximation to exact GP regression.

PostScript [BibTex]

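The Nyström approximation itself is compact enough to sketch: pick m landmark points and approximate the full kernel matrix as K ≈ K_nm K_mm^+ K_nm^T. The data, kernel, and landmark count below are illustrative assumptions.

```python
# Hedged sketch of the Nystroem low-rank kernel approximation; data, kernel
# and subset size are assumptions, not taken from the report.
import numpy as np

def rbf(A, B, gamma=0.5):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
m = 50                                        # number of landmark points
idx = rng.choice(len(X), size=m, replace=False)

K_nm = rbf(X, X[idx])                         # n x m cross-kernel block
K_mm = K_nm[idx]                              # m x m landmark block
K_approx = K_nm @ np.linalg.pinv(K_mm) @ K_nm.T

K = rbf(X, X)
print("relative error:", np.linalg.norm(K - K_approx) / np.linalg.norm(K))
```
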

2001


no image
Kernel Methods for Extracting Local Image Semantics

Bradshaw, B., Schölkopf, B., Platt, J.

(MSR-TR-2001-99), Microsoft Research, October 2001 (techreport)

Web [BibTex]

Calibration of Digital Amateur Cameras

Urbanek, M., Horaud, R., Sturm, P.

(RR-4214), INRIA Rhone Alpes, Montbonnot, France, July 2001 (techreport)

Web [BibTex]

Variationsverfahren zur Untersuchung von Grundzustandseigenschaften des Ein-Band Hubbard-Modells

Eichhorn, J.

Biologische Kybernetik, Technische Universität Dresden, Dresden/Germany, May 2001 (diplomathesis)

Abstract
Using different modifications of a new variational approach, static ground-state properties of the one-band Hubbard model, such as the energy and the staggered magnetisation, are calculated. By taking additional fluctuations into account, the method is gradually improved so that a very good description of the energy in one and two dimensions can be achieved. After a detailed discussion of the application in one dimension, extensions to two dimensions are introduced. Using a modified version of the variational ansatz, a description of the quantum phase transition for the magnetisation should, in particular, be possible.

PostScript [BibTex]

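For reference, the one-band Hubbard model studied in the thesis is conventionally defined by the Hamiltonian below (standard notation, not reproduced from the thesis itself):

```latex
% t: nearest-neighbour hopping amplitude; U: on-site Coulomb repulsion;
% c_{i\sigma}^{\dagger}, c_{i\sigma}: creation/annihilation operators for an
% electron with spin \sigma on site i; n_{i\sigma} = c_{i\sigma}^{\dagger} c_{i\sigma}.
H = -t \sum_{\langle i,j \rangle,\, \sigma}
      \bigl( c_{i\sigma}^{\dagger} c_{j\sigma} + \text{h.c.} \bigr)
    \;+\; U \sum_{i} n_{i\uparrow}\, n_{i\downarrow}
```
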
Cerebellar Control of Robot Arms

Peters, J.

Biologische Kybernetik, Technische Universität München, München, Germany, 2001 (diplomathesis)

[BibTex]

Incorporating Invariances in Non-Linear Support Vector Machines

Chapelle, O., Schölkopf, B.

Max Planck Institute for Biological Cybernetics / Biowulf Technologies, 2001 (techreport)

Abstract
We consider the problem of how to incorporate into the Support Vector Machine (SVM) framework invariances given by some a priori known transformations under which the data should be invariant. This extends previous work that was only applicable to linear SVMs, and we show on a digit recognition task that the proposed approach is superior to the traditional Virtual Support Vector method.

PostScript [BibTex]

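The traditional Virtual Support Vector baseline mentioned in the abstract is simple to sketch: train an SVM, apply the known invariance transformations to its support vectors only (they carry all the decision information), and retrain on the augmented set. The data, the transformation (a small translation), and the hyperparameters below are illustrative assumptions.

```python
# Hedged sketch of the Virtual Support Vector method; dataset, invariance
# transformation and SVM settings are assumptions for illustration only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

svm = SVC(kernel="rbf").fit(X, y)

sv, sv_y = svm.support_vectors_, y[svm.support_]
shift = np.array([0.05, 0.0])                  # assumed invariance: small shift
X_aug = np.vstack([X, sv + shift, sv - shift]) # virtual SVs in both directions
y_aug = np.concatenate([y, sv_y, sv_y])

svm_vsv = SVC(kernel="rbf").fit(X_aug, y_aug)  # retrained, invariance-augmented
```
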
On Unsupervised Learning of Mixtures of Markov Sources

Seldin, Y.

Biologische Kybernetik, The Hebrew University of Jerusalem, Israel, 2001 (diplomathesis)

PDF [BibTex]

Bound on the Leave-One-Out Error for Density Support Estimation using nu-SVMs

Gretton, A., Herbrich, R., Schölkopf, B., Smola, A., Rayner, P.

University of Cambridge, 2001 (techreport)

[BibTex]

Bound on the Leave-One-Out Error for 2-Class Classification using nu-SVMs

Gretton, A., Herbrich, R., Schölkopf, B., Rayner, P.

University of Cambridge, 2001, Updated May 2003 (literature review expanded) (techreport)

Abstract
Three estimates of the leave-one-out error for ν-support vector (SV) machine binary classifiers are presented. Two of the estimates are based on the geometrical concept of the span, which was introduced in the context of bounding the leave-one-out error for C-SV machine binary classifiers, while the third is based on optimisation over the criterion used to train the ν-support vector classifier. It is shown that the estimates presented herein provide informative and efficient approximations of the generalisation behaviour, in both a toy example and benchmark data sets. The proof strategies in the ν-SV context are also compared with those used to derive leave-one-out error estimates in the C-SV case.

PostScript [BibTex]

Some kernels for structured data

Bartlett, P., Schölkopf, B.

Biowulf Technologies, 2001 (techreport)

[BibTex]

Support Vector Machines: Theorie und Anwendung auf Prädiktion epileptischer Anfälle auf der Basis von EEG-Daten

Lal, TN.

Biologische Kybernetik, Institut für Angewandte Mathematik, Universität Bonn, 2001, Advised by Prof. Dr. S. Albeverio (diplomathesis)

ZIP [BibTex]

Inference Principles and Model Selection

Buhmann, J., Schölkopf, B.

(01301), Dagstuhl Seminar, 2001 (techreport)

Web [BibTex]


2000


Advances in Large Margin Classifiers

Smola, A., Bartlett, P., Schölkopf, B., Schuurmans, D.

pages: 422, Neural Information Processing, MIT Press, Cambridge, MA, USA, October 2000 (book)

Abstract
The concept of large margins is a unifying principle for the analysis of many different approaches to the classification of data from examples, including boosting, mathematical programming, neural networks, and support vector machines. The fact that it is the margin, or confidence level, of a classification (that is, a scale parameter) rather than a raw training error that matters has become a key tool for dealing with classifiers. This book shows how this idea applies to both the theoretical analysis and the design of algorithms. The book provides an overview of recent developments in large margin classifiers, examines connections with other methods (e.g., Bayesian inference), and identifies strengths and weaknesses of the method, as well as directions for future research. Among the contributors are Manfred Opper, Vladimir Vapnik, and Grace Wahba.

Web [BibTex]

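For reference, the margin the abstract refers to is the standard quantity below: positive exactly when the example is classified correctly, with its magnitude acting as a confidence level (conventional notation, not quoted from the book):

```latex
% Functional margin of a real-valued classifier f on an example (x, y) with
% y in {-1, +1}; for a linear classifier, the geometric margin additionally
% normalizes by the weight norm.
\gamma(x, y) = y\, f(x),
\qquad
\gamma_{\text{geom}}(x, y) = \frac{y \,(\langle w, x \rangle + b)}{\lVert w \rVert}
```
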
Three-dimensional reconstruction of planar scenes

Urbanek, M.

Biologische Kybernetik, INP Grenoble, Warsaw University of Technology, September 2000 (diplomathesis)

Abstract
For a planar scene, we propose an algorithm to estimate its 3D structure. Homographies between corresponding planes are employed to recover the camera motion parameters between the camera positions from which images of the scene were taken. The cases of one and of multiple corresponding planes present in the scene are distinguished, and solutions are proposed for both cases.

ZIP [BibTex]

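In modern terms, the recovery of camera motion from an inter-image homography can be sketched with OpenCV. This is a hedged illustration of the general idea, not the thesis's algorithm; the intrinsics, points, and synthetic homography are assumptions.

```python
# Hedged sketch: estimate a homography between two views of a plane, then
# decompose it into candidate camera motions (R, t) and plane normals.
import numpy as np
import cv2

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics

rng = np.random.default_rng(0)
pts1 = rng.uniform(0, 640, size=(20, 2)).astype(np.float32)

# Synthetic second view: move the points by a known homography for illustration.
H_true = np.array([[1.0, 0.02, 5.0], [-0.01, 1.0, -3.0], [1e-4, 0.0, 1.0]])
p = np.hstack([pts1, np.ones((20, 1), np.float32)]) @ H_true.T
pts2 = (p[:, :2] / p[:, 2:]).astype(np.float32)

H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC)
n_solutions, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
# Up to four (R, t, n) candidates are returned; the physically valid one is
# chosen by visibility constraints (points must lie in front of both cameras).
```
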
Intelligence as a Complex System

Zhou, D.

Biologische Kybernetik, 2000 (phdthesis)

[BibTex]

Neural Networks in Robot Control

Peters, J.

Biologische Kybernetik, Fernuniversität Hagen, Hagen, Germany, 2000 (diplomathesis)

[BibTex]

The Kernel Trick for Distances

Schölkopf, B.

(MSR-TR-2000-51), Microsoft Research, Redmond, WA, USA, 2000 (techreport)

Abstract
A method is described which, like the kernel trick in support vector machines (SVMs), lets us generalize distance-based algorithms to operate in feature spaces, usually nonlinearly related to the input space. This is done by identifying a class of kernels which can be represented as norm-based distances in Hilbert spaces. It turns out that common kernel algorithms, such as SVMs and kernel PCA, are actually distance-based algorithms and can be run with that class of kernels, too. As well as providing a useful new insight into how these algorithms work, the present work can form the basis for conceiving new algorithms.

PDF Web [BibTex]

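The identity at the heart of the report is easy to use directly: a positive definite kernel k induces the feature-space distance d(x, y)^2 = k(x, x) - 2 k(x, y) + k(y, y), so any distance-based algorithm can operate in the feature space without computing it explicitly. A minimal sketch (kernel and data are illustrative):

```python
# Distance between the feature-space images of x and y, computed from kernel
# evaluations alone; the RBF kernel and the sample points are assumptions.
import numpy as np

def rbf(a, b, gamma=1.0):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def kernel_distance(a, b, k=rbf):
    return np.sqrt(k(a, a) - 2 * k(a, b) + k(b, b))

x, y = np.array([0.0, 1.0]), np.array([1.0, 0.5])
print(kernel_distance(x, y))   # feature-space distance under the RBF kernel
```
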
Kernel method for percentile feature extraction

Schölkopf, B., Platt, J., Smola, A.

(MSR-TR-2000-22), Microsoft Research, 2000 (techreport)

Abstract
A method is proposed which computes a direction in a dataset such that a specified fraction of a particular class of all examples is separated from the overall mean by a maximal margin. The projector onto that direction can be used for class-specific feature extraction. The algorithm is carried out in a feature space associated with a support vector kernel function; hence it can be used to construct a large class of nonlinear feature extractors. In the particular case where there exists only one class, the method can be thought of as a robust form of principal component analysis, where instead of variance we maximize percentile thresholds. Finally, we generalize it to also include the possibility of specifying negative examples.

PDF [BibTex]
