

2006


A New Projected Quasi-Newton Approach for the Nonnegative Least Squares Problem

Kim, D., Sra, S., Dhillon, I.

(TR-06-54), Univ. of Texas, Austin, December 2006 (techreport)

PDF [BibTex]



Probabilistic inference for solving (PO)MDPs

Toussaint, M., Harmeling, S., Storkey, A.

(934), School of Informatics, University of Edinburgh, December 2006 (techreport)

PDF [BibTex]



Minimal Logical Constraint Covering Sets

Sinz, F., Schölkopf, B.

(155), Max Planck Institute for Biological Cybernetics, Tübingen, December 2006 (techreport)

Abstract
We propose a general framework for computing minimal set covers under a certain class of logical constraints. The underlying idea is to transform the problem into a mathematical program under linear constraints. In this sense it can be seen as a natural extension of the vector quantization algorithm proposed by Tipping and Schoelkopf. We show which class of logical constraints can be cast and relaxed into linear constraints and give an algorithm for the transformation.
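The covering-as-linear-program idea can be illustrated with a plain set-cover LP relaxation. This is only a minimal sketch of that general transformation, not the authors' algorithm; the incidence matrix below is an invented toy example.

```python
# Minimal sketch: set cover relaxed to a linear program (illustrative toy data,
# not the report's constraint-transformation algorithm).
import numpy as np
from scipy.optimize import linprog

# A[i, j] = 1 if candidate set j covers element i
A = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]])

n_sets = A.shape[1]
# minimize sum_j x_j  subject to  A x >= 1,  0 <= x_j <= 1
res = linprog(c=np.ones(n_sets),
              A_ub=-A, b_ub=-np.ones(A.shape[0]),
              bounds=[(0, 1)] * n_sets,
              method="highs")
print("relaxed selection weights:", res.x)  # threshold/round to obtain a cover
```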

PDF [BibTex]



New Methods for the P300 Visual Speller

Biessmann, F.

(1), (Editors: Hill, J.), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2006 (techreport)

PDF [BibTex]



Geometric Analysis of Hilbert Schmidt Independence criterion based ICA contrast function

Shen, H., Jegelka, S., Gretton, A.

(PA006080), National ICT Australia, Canberra, Australia, October 2006 (techreport)

Web [BibTex]



Extraction of visual features from natural video data using Slow Feature Analysis

Nickisch, H.

Biologische Kybernetik, Technische Universität Berlin, Berlin, Germany, September 2006 (diplomathesis)

Abstract
The NeuRoBot research project aims at the unsupervised learning of a neurally inspired control architecture, under the constraints of biological plausibility and the use of a camera as the only sensor. Visual features that provide an adequate representation of the environment are indispensable for reaching the goal of collision-free navigation. Temporal coherence is a new learning principle that is able to reproduce findings from the biology of vision. It is motivated by the observation that the "sensors" of the retina vary on considerably shorter time scales than an abstract description does. Temporal slowness analysis solves this problem by extracting slowly varying signals from rapidly varying input signals. A generalisation to signals that depend nonlinearly on the inputs is possible via the kernel trick. The only prior knowledge used is the temporal smoothness of the extracted signals. In this diploma thesis, slowness analysis is applied to image patches taken from videos of a robot camera and of a simulation environment. First, the slowest possible functions are determined by means of parameter exploration and cross-validation. The feature functions are then analysed, and some starting points for their interpretation are given. Because of the very large data sets and the extensive computations, a large part of this work also deals with complexity considerations and questions of efficient computation. Edge detectors in various phases, mostly with horizontal orientation, are the most important functions emerging from the analysis. An application to concrete navigation tasks of the robot has not yet been achieved; a visual interpretation of the learned features is, however, clearly possible.
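For readers unfamiliar with the slowness principle, here is a minimal sketch of plain linear Slow Feature Analysis (the standard textbook recipe, not the thesis implementation or its kernelised variant); the data matrix `video_patches` is hypothetical.

```python
# Minimal sketch of linear Slow Feature Analysis: whiten the inputs, then keep the
# directions in which the whitened signal changes most slowly over time.
import numpy as np

def linear_sfa(x, n_components=2):
    # x: (T, d) array of input signals over time; assumes a non-degenerate covariance
    x = x - x.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(x, rowvar=False))
    z = x @ (evecs / np.sqrt(evals))            # whitened signals
    dvals, dvecs = np.linalg.eigh(np.cov(np.diff(z, axis=0), rowvar=False))
    return z @ dvecs[:, :n_components]          # slowest-varying output signals

# slow_features = linear_sfa(video_patches)    # video_patches: hypothetical (T, d) data
```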

PDF [BibTex]



An Online-Computation Approach to Optimal Finite-Horizon State-Feedback Control of Nonlinear Stochastic Systems

Deisenroth, MP.

Biologische Kybernetik, Universität Karlsruhe (TH), Karlsruhe, Germany, August 2006 (diplomathesis)

PDF [BibTex]



A tutorial on spectral clustering

von Luxburg, U.

(149), Max Planck Institute for Biological Cybernetics, Tübingen, August 2006 (techreport)

Abstract
In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm. Nevertheless, at first glance spectral clustering looks a bit mysterious, and it is not obvious why it works at all and what it really does. This article is a tutorial introduction to spectral clustering. We describe different graph Laplacians and their basic properties, present the most common spectral clustering algorithms, and derive those algorithms from scratch by several different approaches. Advantages and disadvantages of the different spectral clustering algorithms are discussed.
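As a concrete illustration of the recipe the tutorial describes, the following is a minimal sketch of unnormalized spectral clustering on a fully connected Gaussian similarity graph; the kernel width and other choices are illustrative rather than taken from the report.

```python
# Minimal sketch of unnormalized spectral clustering.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def spectral_clustering(X, k, sigma=1.0):
    W = np.exp(-cdist(X, X, "sqeuclidean") / (2 * sigma ** 2))  # similarity graph
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W              # unnormalized graph Laplacian L = D - W
    _, evecs = np.linalg.eigh(L)
    U = evecs[:, :k]                            # embed points via the first k eigenvectors
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)
```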

PDF [BibTex]



Towards the Inference of Graphs on Ordered Vertexes

Zien, A., Raetsch, G., Ong, C.

(150), Max Planck Institute for Biological Cybernetics, Tübingen, August 2006 (techreport)

Abstract
We propose novel methods for machine learning of structured output spaces. Specifically, we consider outputs which are graphs with vertices that have a natural order. We consider the usual adjacency matrix representation of graphs, as well as two other representations for such a graph: (a) decomposing the graph into a set of paths, (b) converting the graph into a single sequence of nodes with labeled edges. For each of the three representations, we propose an encoding and decoding scheme. We also propose an evaluation measure for comparing two graphs.

PDF [BibTex]



Object Classification using Local Image Features

Nowozin, S.

Biologische Kybernetik, Technical University of Berlin, Berlin, Germany, May 2006 (diplomathesis)

Abstract
Object classification in digital images remains one of the most challenging tasks in computer vision. Advances in the last decade have produced methods to repeatably extract and describe characteristic local features in natural images. In order to apply machine learning techniques in computer vision systems, a representation based on these features is needed. A set of local features is the most popular representation and often used in conjunction with Support Vector Machines for classification problems. In this work, we examine current approaches based on set representations and identify their shortcomings. To overcome these shortcomings, we argue for extending the set representation into a graph representation, encoding more relevant information. Attributes associated with the edges of the graph encode the geometric relationships between individual features by making use of the meta data of each feature, such as the position, scale, orientation and shape of the feature region. At the same time all invariances provided by the original feature extraction method are retained. To validate the novel approach, we use a standard subset of the ETH-80 classification benchmark.

PDF [BibTex]



Nonnegative Matrix Approximation: Algorithms and Applications

Sra, S., Dhillon, I.

Univ. of Texas, Austin, May 2006 (techreport)

[BibTex]



An Automated Combination of Sequence Motif Kernels for Predicting Protein Subcellular Localization

Zien, A., Ong, C.

(146), Max Planck Institute for Biological Cybernetics, Tübingen, April 2006 (techreport)

Abstract
Protein subcellular localization is a crucial ingredient to many important inferences about cellular processes, including prediction of protein function and protein interactions. While many predictive computational tools have been proposed, they tend to have complicated architectures and require many design decisions from the developer. We propose an elegant and fully automated approach to building a prediction system for protein subcellular localization. We propose a new class of protein sequence kernels which considers all motifs including motifs with gaps. This class of kernels allows the inclusion of pairwise amino acid distances into their computation. We further propose a multiclass support vector machine method which directly solves protein subcellular localization without resorting to the common approach of splitting the problem into several binary classification problems. To automatically search over families of possible amino acid motifs, we generalize our method to optimize over multiple kernels at the same time. We compare our automated approach to four other predictors on three different datasets.

PDF Web [BibTex]



Training a Support Vector Machine in the Primal

Chapelle, O.

(147), Max Planck Institute for Biological Cybernetics, Tübingen, April 2006, The version in the "Large Scale Kernel Machines" book is more up to date. (techreport)

Abstract
Most literature on Support Vector Machines (SVMs) concentrates on the dual optimization problem. In this paper, we would like to point out that the primal problem can also be solved efficiently, both for linear and non-linear SVMs, and that there is no reason for ignoring it. Moreover, from the primal point of view, new families of algorithms for large-scale SVM training can be investigated.
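To make the primal point of view concrete, here is a minimal sketch of training a linear SVM directly in the primal with a squared hinge loss and plain gradient descent; it illustrates the idea only and is not the optimizer developed in the report.

```python
# Minimal sketch: linear SVM trained in the primal on a regularized squared hinge loss.
import numpy as np

def primal_linear_svm(X, y, lam=1e-2, lr=1e-2, n_iter=500):
    # y in {-1, +1}; objective: lam * ||w||^2 + mean(max(0, 1 - y * (X w + b))^2)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(n_iter):
        viol = np.maximum(0.0, 1.0 - y * (X @ w + b))        # margin violations
        w -= lr * (2 * lam * w - (2.0 / n) * (X.T @ (viol * y)))
        b -= lr * (-(2.0 / n) * np.sum(viol * y))
    return w, b
```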

PDF [BibTex]



Kernel PCA for Image Compression

Huhle, B.

Biologische Kybernetik, Eberhard-Karls-Universität, Tübingen, Germany, April 2006 (diplomathesis)

PDF [BibTex]



Gaussian Process Models for Robust Regression, Classification, and Reinforcement Learning

Kuss, M.

Biologische Kybernetik, Technische Universität Darmstadt, Darmstadt, Germany, March 2006, passed with distinction, published online (phdthesis)

PDF [BibTex]



Cross-Validation Optimization for Structured Hessian Kernel Methods

Seeger, M., Chapelle, O.

Max Planck Institute for Biological Cybernetics, Tübingen, Germany, February 2006 (techreport)

Abstract
We address the problem of learning hyperparameters in kernel methods for which the Hessian of the objective is structured. We propose an approximation to the cross-validation log likelihood whose gradient can be computed analytically, solving the hyperparameter learning problem efficiently through nonlinear optimization. Crucially, our learning method is based entirely on matrix-vector multiplication primitives with the kernel matrices and their derivatives, allowing straightforward specialization to new kernels or to large datasets. When applied to the problem of multi-way classification, our method scales linearly in the number of classes and gives rise to state-of-the-art results on a remote imaging task.
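The "matrix-vector primitives" idea can be illustrated by solving a regularized kernel system with conjugate gradients, touching the kernel matrix only through matvecs. This is a generic sketch under that assumption, not the report's cross-validation optimization; the RBF kernel below is purely illustrative.

```python
# Minimal sketch: solve (K + lam*I) alpha = y using only kernel matrix-vector products.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def solve_kernel_system(kernel_matvec, y, lam=1e-3):
    n = y.shape[0]
    op = LinearOperator((n, n), matvec=lambda v: kernel_matvec(v) + lam * v)
    alpha, info = cg(op, y, atol=1e-8)
    return alpha

# Illustrative RBF kernel; in practice the matvec could be computed block-wise
# without ever forming K in memory.
X = np.random.randn(200, 5)
K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
alpha = solve_kernel_system(lambda v: K @ v, np.random.randn(200))
```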

PDF Web [BibTex]



Semigroups applied to transport and queueing processes

Radl, A.

Biologische Kybernetik, Eberhard Karls Universität, Tübingen, 2006 (phdthesis)

PDF [BibTex]



Local Alignment Kernels for Protein Homology Detection

Saigo, H.

Biologische Kybernetik, Kyoto University, Kyoto, Japan, 2006 (phdthesis)

[BibTex]



Implicit Wiener Series, Part II: Regularised estimation

Gehler, P., Franz, M.

(148), Max Planck Institute, 2006 (techreport)

PDF [BibTex]

2001


Kernel Methods for Extracting Local Image Semantics

Bradshaw, B., Schölkopf, B., Platt, J.

(MSR-TR-2001-99), Microsoft Research, October 2001 (techreport)

Web [BibTex]



Calibration of Digital Amateur Cameras

Urbanek, M., Horaud, R., Sturm, P.

(RR-4214), INRIA Rhone Alpes, Montbonnot, France, July 2001 (techreport)

Web [BibTex]



Variationsverfahren zur Untersuchung von Grundzustandseigenschaften des Ein-Band Hubbard-Modells

Eichhorn, J.

Biologische Kybernetik, Technische Universität Dresden, Dresden/Germany, May 2001 (diplomathesis)

Abstract
Using different modifications of a new variational approach, static ground-state properties of the one-band Hubbard model, such as the energy and the staggered magnetisation, are calculated. By taking additional fluctuations into account, the method is gradually improved so that a very good description of the energy in one and two dimensions can be achieved. After a detailed discussion of the application in one dimension, extensions to two dimensions are introduced. Using a modified version of the variational ansatz, in particular a description of the quantum phase transition for the magnetisation should be possible.

PostScript [BibTex]



Cerebellar Control of Robot Arms

Peters, J.

Biologische Kybernetik, Technische Universität München, München, Germany, 2001 (diplomathesis)

[BibTex]



Incorporating Invariances in Non-Linear Support Vector Machines

Chapelle, O., Schölkopf, B.

Max Planck Institute for Biological Cybernetics / Biowulf Technologies, 2001 (techreport)

Abstract
We consider the problem of incorporating into the Support Vector Machine (SVM) framework invariances given by some a priori known transformations under which the data should be invariant. This extends previous work that was only applicable to linear SVMs, and we show on a digit recognition task that the proposed approach is superior to the traditional Virtual Support Vector method.

PostScript [BibTex]



On Unsupervised Learning of Mixtures of Markov Sources

Seldin, Y.

Biologische Kybernetik, The Hebrew University of Jerusalem, Israel, 2001 (diplomathesis)

PDF [BibTex]



Bound on the Leave-One-Out Error for Density Support Estimation using nu-SVMs

Gretton, A., Herbrich, R., Schölkopf, B., Smola, A., Rayner, P.

University of Cambridge, 2001 (techreport)

[BibTex]



Bound on the Leave-One-Out Error for 2-Class Classification using nu-SVMs

Gretton, A., Herbrich, R., Schölkopf, B., Rayner, P.

University of Cambridge, 2001, Updated May 2003 (literature review expanded) (techreport)

Abstract
Three estimates of the leave-one-out error for $\nu$-support vector (SV) machine binary classifiers are presented. Two of the estimates are based on the geometrical concept of the {\em span}, which was introduced in the context of bounding the leave-one-out error for $C$-SV machine binary classifiers, while the third is based on optimisation over the criterion used to train the $\nu$-support vector classifier. It is shown that the estimates presented herein provide informative and efficient approximations of the generalisation behaviour, in both a toy example and benchmark data sets. The proof strategies in the $\nu$-SV context are also compared with those used to derive leave-one-out error estimates in the $C$-SV case.

PostScript [BibTex]



Some kernels for structured data

Bartlett, P., Schölkopf, B.

Biowulf Technologies, 2001 (techreport)

[BibTex]



Support Vector Machines: Theorie und Anwendung auf Prädiktion epileptischer Anfälle auf der Basis von EEG-Daten

Lal, TN.

Biologische Kybernetik, Institut für Angewandte Mathematik, Universität Bonn, 2001, Advised by Prof. Dr. S. Albeverio (diplomathesis)

ZIP [BibTex]



Inference Principles and Model Selection

Buhmann, J., Schölkopf, B.

(01301), Dagstuhl Seminar, 2001 (techreport)

Web [BibTex]
