

2018


A virtual reality environment for experiments in assistive robotics and neural interfaces

Bustamante, S.

Graduate School of Neural Information Processing, Eberhard Karls Universität Tübingen, Germany, 2018 (mastersthesis)

PDF [BibTex]

Optimal Trajectory Generation and Learning Control for Robot Table Tennis

Koc, O.

Technical University Darmstadt, Germany, 2018 (phdthesis)

[BibTex]

Distribution-Dissimilarities in Machine Learning

Simon-Gabriel, C. J.

Eberhard Karls Universität Tübingen, Germany, 2018 (phdthesis)

[BibTex]

Domain Adaptation Under Causal Assumptions

Lechner, T.

Eberhard Karls Universität Tübingen, Germany, 2018 (mastersthesis)

[BibTex]

Probabilistic Approaches to Stochastic Optimization

Mahsereci, M.

Eberhard Karls Universität Tübingen, Germany, 2018 (phdthesis)

link (url) Project Page [BibTex]

Reinforcement Learning for High-Speed Robotics with Muscular Actuation

Guist, S.

Ruprecht-Karls-Universität Heidelberg, 2018 (mastersthesis)

[BibTex]

Probabilistic Ordinary Differential Equation Solvers — Theory and Applications

Schober, M.

Eberhard Karls Universität Tübingen, Germany, 2018 (phdthesis)

[BibTex]

A machine learning approach to taking EEG-based computer interfaces out of the lab

Jayaram, V.

Graduate Training Centre of Neuroscience, IMPRS, Eberhard Karls Universität Tübingen, Germany, 2018 (phdthesis)

[BibTex]


2007


Some Theoretical Aspects of Human Categorization Behavior: Similarity and Generalization

Jäkel, F.

Biologische Kybernetik, Eberhard-Karls-Universität Tübingen, Tübingen, Germany, November 2007, passed with "ausgezeichnet", summa cum laude, published online (phdthesis)

PDF [BibTex]

Statistical Learning Theory Approaches to Clustering

Jegelka, S.

Biologische Kybernetik, Eberhard-Karls-Universität Tübingen, Tübingen, Germany, November 2007 (diplomathesis)

PDF [BibTex]

Bayesian Estimators for Robins-Ritov’s Problem

Harmeling, S., Toussaint, M.

(EDI-INF-RR-1189), School of Informatics, University of Edinburgh, October 2007 (techreport)

Abstract
Bayesian or likelihood-based approaches to data analysis have become very popular in the field of Machine Learning. However, there exist theoretical results which question the general applicability of such approaches; among them is a result by Robins and Ritov, who introduce a specific example for which they prove that a likelihood-based estimator will fail (i.e., in certain cases it does not converge to the true parameter estimate, even given infinite data). In this paper we consider various approaches to formulating likelihood-based estimators in this example, basically by considering various extensions of the presumed generative model of the data. We derive estimators which are very similar to the classical Horvitz-Thompson estimator and which also account for a priori knowledge of an observation probability function.
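
For readers unfamiliar with the classical estimator referenced above, the following sketch illustrates a Horvitz-Thompson-style estimate of a population mean when each value is observed with a known probability. The simulated setup and all names are our own illustration, not the report's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated population; the observation probability depends on the value,
# so a naive average over observed values is biased (assumed toy setup).
n = 100_000
y = rng.normal(loc=2.0, scale=1.0, size=n)          # quantity of interest
pi = np.clip(1.0 / (1.0 + np.exp(-y)), 0.05, 0.95)   # known P(observe y_i)
observed = rng.uniform(size=n) < pi                   # missingness indicator

# Horvitz-Thompson estimator of the population mean: weight each observed
# value by the inverse of its (known) observation probability.
ht_estimate = np.sum(y[observed] / pi[observed]) / n

# Naive estimate that ignores the observation mechanism, for comparison.
naive_estimate = y[observed].mean()

print(f"true mean  {y.mean():.3f}")
print(f"HT         {ht_estimate:.3f}")
print(f"naive      {naive_estimate:.3f}")
```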

PDF [BibTex]

Learning with Transformation Invariant Kernels

Walder, C., Chapelle, O.

(165), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, September 2007 (techreport)

Abstract
This paper considers kernels invariant to translation, rotation and dilation. We show that no non-trivial positive definite (p.d.) kernels exist which are radial and dilation invariant, only conditionally positive definite (c.p.d.) ones. Accordingly, we discuss the c.p.d. case and provide some novel analysis, including an elementary derivation of a c.p.d. representer theorem. On the practical side, we give a support vector machine (s.v.m.) algorithm for arbitrary c.p.d. kernels. For the thin-plate kernel this leads to a classifier with only one parameter (the amount of regularisation), which we demonstrate to be as effective as an s.v.m. with the Gaussian kernel, even though the Gaussian involves a second parameter (the length scale).
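
As a small illustration of the kernel family discussed above, the snippet below evaluates the thin-plate kernel k(x, y) = ||x - y||^2 log ||x - y|| on toy data. It only shows the (conditionally positive definite) kernel evaluation; the c.p.d. SVM training described in the report is not implemented here, and the toy data are ours.

```python
import numpy as np

def thin_plate_kernel(X, Y):
    """Thin-plate kernel k(x, y) = ||x - y||^2 * log ||x - y||.

    This kernel is not positive definite but conditionally positive
    definite, so it cannot be plugged into a standard SVM solver without
    modifications such as those discussed in the report.
    """
    # Pairwise squared Euclidean distances.
    d2 = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    d2 = np.maximum(d2, 0.0)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = d2 * np.log(np.sqrt(d2))
    return np.nan_to_num(K)  # k(x, x) = 0 by continuity

X = np.random.default_rng(0).normal(size=(5, 2))
print(thin_plate_kernel(X, X))
```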

PDF [BibTex]

Scalable Semidefinite Programming using Convex Perturbations

Kulis, B., Sra, S., Jegelka, S.

(TR-07-47), University of Texas, Austin, TX, USA, September 2007 (techreport)

Abstract
Several important machine learning problems can be modeled and solved via semidefinite programs. Often, researchers invoke off-the-shelf software for the associated optimization, which can be inappropriate for many applications due to computational and storage requirements. In this paper, we introduce the use of convex perturbations for semidefinite programs (SDPs). Using a particular perturbation function, we arrive at an algorithm for SDPs that has several advantages over existing techniques: a) it is simple, requiring only a few lines of MATLAB, b) it is a first-order method which makes it scalable, c) it can easily exploit the structure of a particular SDP to gain efficiency (e.g., when the constraint matrices are low-rank). We demonstrate on several machine learning applications that the proposed algorithm is effective in finding fast approximations to large-scale SDPs.

PDF [BibTex]

Sparse Multiscale Gaussian Process Regression

Walder, C., Kim, K., Schölkopf, B.

(162), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, August 2007 (techreport)

Abstract
Most existing sparse Gaussian process (g.p.) models seek computational advantages by basing their computations on a set of m basis functions that are the covariance function of the g.p. with one of its two inputs fixed. We generalise this for the case of the Gaussian covariance function, by basing our computations on m Gaussian basis functions with arbitrary diagonal covariance matrices (or length scales). For a fixed number of basis functions and any given criterion, this additional flexibility permits approximations no worse and typically better than were previously possible. Although we focus on g.p. regression, the central idea is applicable to all kernel-based algorithms, such as the support vector machine. We perform gradient-based optimisation of the marginal likelihood, which costs O(m^2 n) time where n is the number of data points, and compare the method to various other sparse g.p. methods. Our approach outperforms the other methods, particularly for the case of very few basis functions, i.e. a very high sparsity ratio.
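
The toy sketch below illustrates only the core ingredient mentioned above: a small number of Gaussian basis functions, each with its own length scale. It fits the basis weights by regularized least squares rather than by the marginal-likelihood optimisation used in the report; centres, widths and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D data set.
n = 200
x = np.sort(rng.uniform(-3, 3, size=n))
y = np.sin(2 * x) + 0.1 * rng.normal(size=n)

# m Gaussian basis functions, each with its own centre and length scale
# (the "multiscale" aspect: the widths need not be equal).
centres = np.linspace(-3, 3, 8)
lengthscales = np.array([0.3, 0.5, 0.8, 1.2, 1.2, 0.8, 0.5, 0.3])

def design_matrix(x):
    # Phi[i, j] = exp(-(x_i - c_j)^2 / (2 * l_j^2))
    return np.exp(-0.5 * ((x[:, None] - centres[None, :]) / lengthscales[None, :]) ** 2)

Phi = design_matrix(x)

# Regularized least-squares fit of the basis weights (a stand-in for the
# gradient-based marginal-likelihood optimisation in the report).
lam = 1e-3
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(len(centres)), Phi.T @ y)

x_test = np.linspace(-3, 3, 5)
print(design_matrix(x_test) @ w)
```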

PDF [BibTex]

Efficient Subwindow Search for Object Localization

Blaschko, M., Hofmann, T., Lampert, C.

(164), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, August 2007 (techreport)

Abstract
Recent years have seen huge advances in object recognition from images. Recognition rates beyond 95% are the rule rather than the exception on many datasets. However, most state-of-the-art methods can only decide if an object is present or not. They are not able to provide information on the object's location or extent within the image. We report on a simple yet powerful scheme that extends many existing recognition methods to also perform localization of object bounding boxes. This is achieved by maximizing the classification score over all possible subrectangles in the image. Despite the impression that this would be computationally intractable, we show that in many situations efficient algorithms exist which solve a generalized maximum subrectangle problem. We show how our method is applicable to a variety of object detection frameworks and demonstrate its performance by applying it to the popular bag of visual words model, achieving competitive results on the PASCAL VOC 2006 dataset.
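
A minimal illustration of the objective described above: given a hypothetical per-cell score map (e.g. visual-word classifier weights accumulated on a grid), find the axis-aligned subrectangle with maximal total score. This brute-force version uses an integral image; the report's contribution is an efficient branch-and-bound search, which this sketch does not implement.

```python
import numpy as np

def best_subrectangle(score_map):
    """Exhaustively maximize the summed score over all subrectangles.

    score_map: 2-D array of per-cell classifier contributions (may be
    negative). Returns (top, left, bottom, right), inclusive, and the score.
    O(H^2 W^2) with an integral image -- fine for small grids, whereas the
    branch-and-bound scheme in the report scales to real images.
    """
    H, W = score_map.shape
    # Integral image with a zero border: ii[i, j] = sum of score_map[:i, :j].
    ii = np.zeros((H + 1, W + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(score_map, axis=0), axis=1)

    best, best_box = -np.inf, None
    for t in range(H):
        for b in range(t, H):
            for l in range(W):
                for r in range(l, W):
                    s = ii[b + 1, r + 1] - ii[t, r + 1] - ii[b + 1, l] + ii[t, l]
                    if s > best:
                        best, best_box = s, (t, l, b, r)
    return best_box, best

rng = np.random.default_rng(0)
scores = rng.normal(-0.2, 1.0, size=(12, 12))   # mostly negative background
scores[3:7, 5:9] += 1.5                          # a high-scoring "object"
print(best_subrectangle(scores))
```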

PDF [BibTex]

Error Correcting Codes for the P300 Visual Speller

Biessmann, F.

Biologische Kybernetik, Eberhard-Karls-Universität Tübingen, Tübingen, Germany, July 2007 (diplomathesis)

Abstract
The aim of brain-computer interface (BCI) research is to establish a communication system based on intentional modulation of brain activity. This is accomplished by classifying patterns of brain activity, volitionally induced by the user. The BCI presented in this study is based on a classical paradigm as proposed by (Farwell and Donchin, 1988), the P300 visual speller. Recording electroencephalograms (EEG) from the scalp while presenting letters successively to the user, the speller can infer from the brain signal which letter the user was focussing on. Since EEG recordings are noisy, usually many repetitions are needed to detect the correct letter. The focus of this study was to improve the accuracy of the visual speller by applying some basic principles from information theory: the stimulus sequences of the speller have been modified into error-correcting codes. Additionally, a language model was incorporated into the probabilistic letter decoder. Classification of single EEG epochs was less accurate using error-correcting codes. However, the novel code could compensate for that, such that overall letter accuracies were as high as or even higher than for classical stimulus codes. In particular at high noise levels, error-correcting decoding achieved higher letter accuracies.
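
A toy sketch of the coding idea: each letter is assigned a binary codeword, every stimulus presentation yields a noisy classifier output for one code bit, and the letter is decoded as the codeword best correlated with the observed soft scores. The hand-picked codebook and Gaussian noise model are invented for illustration; the thesis works with actual P300 stimulus sequences and additionally uses a language model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy codebook: 8 "letters", each a 7-bit codeword (hand-picked, purely
# illustrative; real error-correcting codes are chosen for large distance).
codebook = np.array([
    [0, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 0, 0, 0, 0],
    [1, 0, 0, 1, 1, 0, 0],
    [0, 1, 0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1, 1, 0],
    [1, 1, 0, 0, 1, 0, 1],
    [1, 0, 1, 1, 0, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
])
bipolar = 2 * codebook - 1          # 0/1 -> -1/+1 for correlation decoding

def simulate_epoch_scores(true_letter, noise=1.0):
    """Noisy per-bit classifier scores for one sequence of stimulus epochs."""
    return bipolar[true_letter] + noise * rng.normal(size=codebook.shape[1])

def decode(scores):
    """Pick the codeword with maximal correlation with the soft scores."""
    return int(np.argmax(bipolar @ scores))

true_letter = 3
hits = sum(decode(simulate_epoch_scores(true_letter)) == true_letter
           for _ in range(1000))
print(f"decoded correctly in {hits}/1000 simulated trials")
```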

PDF [BibTex]

Data-driven goodness-of-fit tests

Langovoy, MA.

Biologische Kybernetik, Georg-August-Universität Göttingen, Göttingen, Germany, July 2007 (phdthesis)

Web [BibTex]

Cluster Identification in Nearest-Neighbor Graphs

Maier, M., Hein, M., von Luxburg, U.

(163), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, May 2007 (techreport)

Abstract
Assume we are given a sample of points from some underlying distribution which contains several distinct clusters. Our goal is to construct a neighborhood graph on the sample points such that clusters are "identified": that is, the subgraph induced by points from the same cluster is connected, while subgraphs corresponding to different clusters are not connected to each other. We derive bounds on the probability that cluster identification is successful, and use them to predict "optimal" values of k for the mutual and symmetric k-nearest-neighbor graphs. We point out different properties of the mutual and symmetric nearest-neighbor graphs related to the cluster identification problem.
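
For concreteness, here is a small sketch (our own, with invented data) of building the two graph variants compared above and checking whether each cluster comes out as its own connected component.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)

# Two well-separated Gaussian clusters.
X = np.vstack([rng.normal(0, 0.3, size=(50, 2)),
               rng.normal(5, 0.3, size=(50, 2))])
labels = np.array([0] * 50 + [1] * 50)

def knn_adjacency(X, k):
    """Directed k-nearest-neighbor adjacency matrix (boolean)."""
    D = cdist(X, X)
    np.fill_diagonal(D, np.inf)
    idx = np.argsort(D, axis=1)[:, :k]
    A = np.zeros((len(X), len(X)), dtype=bool)
    A[np.repeat(np.arange(len(X)), k), idx.ravel()] = True
    return A

k = 5
A = knn_adjacency(X, k)
symmetric = A | A.T   # edge if either point lists the other among its k nn
mutual = A & A.T      # edge only if both points list each other

for name, G in [("symmetric", symmetric), ("mutual", mutual)]:
    n_comp, comp = connected_components(csr_matrix(G), directed=False)
    # Cluster "identified" if all its points fall into a single component
    # that contains no points of the other cluster.
    identified = all(len(set(comp[labels == c])) == 1 for c in (0, 1)) and n_comp >= 2
    print(f"{name}: {n_comp} components, clusters identified: {identified}")
```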

PDF [BibTex]

Nonparametric Bayesian Discrete Latent Variable Models for Unsupervised Learning

Görür, D.

Biologische Kybernetik, Technische Universität Berlin, Berlin, Germany, April 2007, published online (phdthesis)

PDF PDF [BibTex]

Exploring model selection techniques for nonlinear dimensionality reduction

Harmeling, S.

(EDI-INF-RR-0960), School of Informatics, University of Edinburgh, March 2007 (techreport)

Abstract
Nonlinear dimensionality reduction (NLDR) methods have become useful tools for practitioners who are faced with the analysis of high-dimensional data. Of course, not all NLDR methods are equally applicable to a particular dataset at hand. Thus it would be useful to come up with model selection criteria that help to choose among different NLDR algorithms. This paper explores various approaches to this problem and evaluates them on controlled data sets. Comprehensive experiments will show that model selection scores based on stability are not useful, while scores based on Gaussian processes are helpful for the NLDR problem.

PDF Web [BibTex]

Dirichlet Mixtures of Bayesian Linear Gaussian State-Space Models: a Variational Approach

Chiappa, S., Barber, D.

(161), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, March 2007 (techreport)

Abstract
We describe two related models to cluster multidimensional time-series under the assumption of an underlying linear Gaussian dynamical process. In the first model, time-series are assigned to the same cluster when they show global similarity in their dynamics, while in the second model time-series are assigned to the same cluster when they show simultaneous similarity. Both models are based on Dirichlet Mixtures of Bayesian Linear Gaussian State-Space Models in order to (semi) automatically determine an appropriate number of components in the mixture, and to additionally bias the components to a parsimonious parameterization. The resulting models are formally intractable and to deal with this we describe a deterministic approximation based on a novel implementation of Variational Bayes.

PDF [BibTex]

Applications of Kernel Machines to Structured Data

Eichhorn, J.

Biologische Kybernetik, Technische Universität Berlin, Berlin, Germany, March 2007, passed with "sehr gut", published online (phdthesis)

PDF [BibTex]

A priori Knowledge from Non-Examples

Sinz, FH.

Biologische Kybernetik, Eberhard-Karls-Universität Tübingen, Tübingen, Germany, March 2007 (diplomathesis)

PDF Web [BibTex]

Modeling data using directional distributions: Part II

Sra, S., Jain, P., Dhillon, I.

(TR-07-05), University of Texas, Austin, TX, USA, February 2007 (techreport)

Abstract
High-dimensional data is central to most data mining applications, and only recently has it been modeled via directional distributions. In [Banerjee et al., 2003] the authors introduced the use of the von Mises-Fisher (vMF) distribution for modeling high-dimensional directional data, particularly for text and gene expression analysis. The vMF distribution is one of the simplest directional distributions. The Watson, Bingham, and Fisher-Bingham distributions provide distributions with an increasing number of parameters and thereby commensurately increased modeling power. This report provides a follow-up study to the initial development in [Banerjee et al., 2003] by presenting Expectation Maximization (EM) procedures for estimating parameters of a mixture of Watson (moW) distributions. The numerical challenges associated with parameter estimation for both of these distributions are significantly more difficult than for the vMF distribution. We develop new numerical approximations for estimating the parameters, permitting us to model real-life data more accurately. Our experimental results establish that for certain data sets improved modeling power translates into better results.

PDF [BibTex]

Automatic 3D Face Reconstruction from Single Images or Video

Breuer, P., Kim, K., Kienzle, W., Blanz, V., Schölkopf, B.

(160), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, February 2007 (techreport)

Abstract
This paper presents a fully automated algorithm for reconstructing a textured 3D model of a face from a single photograph or a raw video stream. The algorithm is based on a combination of Support Vector Machines (SVMs) and a Morphable Model of 3D faces. After SVM face detection, individual facial features are detected using a novel regression- and classification-based approach, and probabilistically plausible configurations of features are selected to produce a list of candidates for several facial feature positions. In the next step, the configurations of feature points are evaluated using a novel criterion that is based on a Morphable Model and a combination of linear projections. Finally, the feature points initialize a model-fitting procedure of the Morphable Model. The result is a high-resolution 3D surface model.

PDF [BibTex]

Machine Learning for Mass Production and Industrial Engineering

Pfingsten, T.

Biologische Kybernetik, Eberhard-Karls-Universität Tübingen, Tübingen, Germany, February 2007 (phdthesis)

PDF [BibTex]

Development of a Brain-Computer Interface Approach Based on Covert Attention to Tactile Stimuli

Raths, C.

University of Tübingen, Germany, January 2007 (diplomathesis)

[BibTex]

A Machine Learning Approach for Estimating the Attenuation Map for a Combined PET/MR Scanner

Hofmann, M.

Biologische Kybernetik, Max Planck Institute for Biological Cybernetics, Tübingen, Germany, 2007 (diplomathesis)

[BibTex]

Machine Learning of Motor Skills for Robotics

Peters, J.

University of Southern California, Los Angeles, CA, USA, 2007, clmc (phdthesis)

Abstract
Autonomous robots that can assist humans in situations of daily life have been a long standing vision of robotics, artificial intelligence, and cognitive sciences. A first step towards this goal is to create robots that can accomplish a multitude of different tasks, triggered by environmental context or higher level instruction. Early approaches to this goal during the heydays of artificial intelligence research in the late 1980s, however, made it clear that an approach purely based on reasoning and human insights would not be able to model all the perceptuomotor tasks that a robot should fulfill. Instead, new hope was put in the growing wake of machine learning that promised fully adaptive control algorithms which learn both by observation and trial-and-error. However, to date, learning techniques have yet to fulfill this promise as only few methods manage to scale into the high-dimensional domains of manipulator robotics, or even the new upcoming trend of humanoid robotics, and usually scaling was only achieved in precisely pre-structured domains. In this thesis, we investigate the ingredients for a general approach to motor skill learning in order to get one step closer towards human-like performance. For doing so, we study two major components for such an approach, i.e., firstly, a theoretically well-founded general approach to representing the required control structures for task representation and execution and, secondly, appropriate learning algorithms which can be applied in this setting. As a theoretical foundation, we first study a general framework to generate control laws for real robots with a particular focus on skills represented as dynamical systems in differential constraint form. We present a point-wise optimal control framework resulting from a generalization of Gauss' principle and show how various well-known robot control laws can be derived by modifying the metric of the employed cost function. The framework has been successfully applied to task space tracking control for holonomic systems for several different metrics on the anthropomorphic SARCOS Master Arm. In order to overcome the limiting requirement of accurate robot models, we first employ learning methods to find learning controllers for task space control. However, when learning to execute a redundant control problem, we face the general problem of the non-convexity of the solution space which can force the robot to steer into physically impossible configurations if supervised learning methods are employed without further consideration. This problem can be resolved using two major insights, i.e., the learning problem can be treated as locally convex and the cost function of the analytical framework can be used to ensure global consistency. Thus, we derive an immediate reinforcement learning algorithm from the expectation-maximization point of view which leads to a reward-weighted regression technique. This method can be used both for operational space control as well as general immediate reward reinforcement learning problems. We demonstrate the feasibility of the resulting framework on the problem of redundant end-effector tracking for both a simulated 3 degrees of freedom robot arm as well as for a simulated anthropomorphic SARCOS Master Arm. 
While learning to execute tasks in task space is an essential component to a general framework to motor skill learning, learning the actual task is of even higher importance, particularly as this issue is more frequently beyond the abilities of analytical approaches than execution. We focus on the learning of elemental tasks which can serve as the "building blocks of movement generation", called motor primitives. Motor primitives are parameterized task representations based on splines or nonlinear differential equations with desired attractor properties. While imitation learning of parameterized motor primitives is a relatively well-understood problem, the self-improvement by interaction of the system with the environment remains a challenging problem, tackled in the fourth chapter of this thesis. For pursuing this goal, we highlight the difficulties with current reinforcement learning methods, and outline both established and novel algorithms for the gradient-based improvement of parameterized policies. We compare these algorithms in the context of motor primitive learning, and show that our most modern algorithm, the Episodic Natural Actor-Critic outperforms previous algorithms by at least an order of magnitude. We demonstrate the efficiency of this reinforcement learning method in the application of learning to hit a baseball with an anthropomorphic robot arm. In conclusion, in this thesis, we have contributed a general framework for analytically computing robot control laws which can be used for deriving various previous control approaches and serves as foundation as well as inspiration for our learning algorithms. We have introduced two classes of novel reinforcement learning methods, i.e., the Natural Actor-Critic and the Reward-Weighted Regression algorithm. These algorithms have been used in order to replace the analytical components of the theoretical framework by learned representations. Evaluations have been performed on both simulated and real robot arms.
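
In its simplest immediate-reward form, the reward-weighted regression mentioned above reduces to a weighted least-squares problem in which the rewards act as sample weights. The sketch below shows that reduction on an invented linear-policy toy problem; it is our illustration of the update, not the thesis implementation, and in the full algorithm this step alternates with sampling fresh actions from the updated policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: state features S, exploratory actions A from a noisy linear
# policy, and bounded rewards that prefer actions near an unknown target.
n, d = 500, 3
S = rng.normal(size=(n, d))
theta_target = np.array([1.0, -2.0, 0.5])
A = S @ theta_target + rng.normal(scale=0.5, size=n)
R = np.exp(-(A - S @ theta_target) ** 2)          # rewards in (0, 1]

def rwr_update(S, A, R):
    """One reward-weighted regression step: weighted least squares with the
    (suitably transformed) rewards as per-sample weights."""
    W = R[:, None]
    return np.linalg.solve(S.T @ (W * S), S.T @ (R * A))

print(rwr_update(S, A, R))     # close to theta_target
print(theta_target)
```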

[BibTex]

Relative Entropy Policy Search

Peters, J.

CLMC Technical Report: TR-CLMC-2007-2, Computational Learning and Motor Control Lab, Los Angeles, CA, 2007, clmc (techreport)

Abstract
This technical report describes a cute idea of how to create new policy search approaches. It directly relates to the Natural Actor-Critic methods but allows the derivation of one shot solutions. Future work may include the application to interesting problems.
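
For the episodic/bandit special case, the relative-entropy idea can be sketched as follows: reweight sampled candidates proportionally to exp(R/eta), with the temperature eta chosen by minimizing the standard dual so that the reweighted distribution stays within a KL bound epsilon of the sampling distribution. This is a generic sketch of that construction under our own toy setup, not the report's derivation; the parameter names are ours.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
R = rng.normal(size=200)      # returns of 200 sampled candidates
epsilon = 0.5                 # KL bound between new and old sample distribution

def dual(eta):
    # g(eta) = eta * epsilon + eta * log( mean_i exp(R_i / eta) ),
    # written with a max-shift for numerical stability.
    eta = float(np.squeeze(eta))
    shifted = (R - R.max()) / eta
    return eta * epsilon + R.max() + eta * np.log(np.mean(np.exp(shifted)))

res = minimize(dual, x0=np.array([1.0]), bounds=[(1e-6, None)])
eta = float(res.x[0])

weights = np.exp((R - R.max()) / eta)
weights /= weights.sum()
print(f"eta = {eta:.3f}, effective sample size = {1.0 / np.sum(weights**2):.1f}")
```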

PDF link (url) [BibTex]


2005


Popper, Falsification and the VC-dimension

Corfield, D., Schölkopf, B., Vapnik, V.

(145), Max Planck Institute for Biological Cybernetics, November 2005 (techreport)

PDF [BibTex]

Extension to Kernel Dependency Estimation with Applications to Robotics

BakIr, G.

Biologische Kybernetik, Technische Universität Berlin, Berlin, November 2005 (phdthesis)

Abstract
Kernel Dependency Estimation (KDE) is a novel technique which was designed to learn mappings between sets without making assumptions on the type of the involved input and output data. It learns the mapping in two stages. In a first step, it tries to estimate coordinates of a feature space representation of elements of the set by solving a high dimensional multivariate regression problem in feature space. Following this, it tries to reconstruct the original representation given the estimated coordinates. This thesis introduces various algorithmic extensions to both stages in KDE. One of the contributions of this thesis is to propose a novel linear regression algorithm that explores low-dimensional subspaces during learning. Furthermore, various existing strategies for reconstructing patterns from feature maps involved in KDE are discussed and novel pre-image techniques are introduced. In particular, pre-image techniques for data types of a discrete nature, such as graphs and strings, are investigated. KDE is then explored in the context of robot pose imitation, where the input is an image of a human operator and the output is the robot's articulated variables. Thus, using KDE, robot pose imitation is formulated as a regression problem.
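
A compressed sketch of the two-stage pipeline described above, using kernel PCA coordinates as the feature-space representation of the outputs, kernel ridge regression for stage one, and a nearest-training-output lookup as a crude stand-in for the pre-image step. This is an illustration of the general idea on invented data, not the algorithms developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy paired data: inputs X (e.g. image features), outputs Y (e.g. pose angles).
n = 200
X = rng.normal(size=(n, 5))
Y = np.column_stack([np.sin(X[:, 0]), X[:, 1] * X[:, 2]]) + 0.05 * rng.normal(size=(n, 2))

# Stage 1a: feature-space coordinates of the outputs via kernel PCA.
Ky = rbf(Y, Y, gamma=1.0)
H = np.eye(n) - np.ones((n, n)) / n
eigval, eigvec = np.linalg.eigh(H @ Ky @ H)
top = eigval.argsort()[::-1][:5]
Z = eigvec[:, top] * np.sqrt(np.maximum(eigval[top], 1e-12))   # n x 5 coordinates

# Stage 1b: kernel ridge regression from X to the output coordinates Z.
Kx = rbf(X, X, gamma=0.5)
alpha = np.linalg.solve(Kx + 1e-2 * np.eye(n), Z)

# Stage 2: crude pre-image -- map a new input to coordinates, then return the
# training output whose coordinates are closest.
def predict(x_new):
    z = rbf(x_new[None, :], X, gamma=0.5) @ alpha
    return Y[np.argmin(((Z - z) ** 2).sum(1))]

print(predict(rng.normal(size=5)))
```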

PDF PDF [BibTex]

Geometrical aspects of statistical learning theory

Hein, M.

Biologische Kybernetik, Darmstadt, Darmstadt, November 2005 (phdthesis)

PDF [BibTex]

Implicit Surfaces For Modelling Human Heads

Steinke, F.

Biologische Kybernetik, Eberhard-Karls-Universität, Tübingen, September 2005 (diplomathesis)

[BibTex]

Machine Learning Methods for Brain-Computer Interfaces

Lal, TN.

Biologische Kybernetik, University of Darmstadt, September 2005 (phdthesis)

Web [BibTex]

A Combinatorial View of Graph Laplacians

Huang, J.

(144), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, August 2005 (techreport)

Abstract
Discussions about different graph Laplacians, mainly the normalized and unnormalized versions, have been ardent with respect to various methods in clustering and graph-based semi-supervised learning. Previous research on graph Laplacians investigated their convergence properties to Laplacian operators on continuous manifolds. There is still no strong proof of convergence for the normalized Laplacian. In this paper, we analyze different variants of graph Laplacians directly from the ways of solving the original graph partitioning problem. The graph partitioning problem is a well-known combinatorial NP-hard optimization problem. The spectral solutions provide evidence that the normalized Laplacian encodes more reasonable considerations for graph partitioning. We also provide some examples to show their differences.
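
For reference, a short snippet computing the two Laplacian variants discussed above from a weighted adjacency matrix, using the standard definitions L = D - W and L_sym = I - D^(-1/2) W D^(-1/2); the toy graph is our own example, not one from the report.

```python
import numpy as np

def laplacians(W):
    """Unnormalized and symmetrically normalized graph Laplacians.

    W: symmetric nonnegative weight (adjacency) matrix with zero diagonal.
    """
    d = W.sum(axis=1)
    L = np.diag(d) - W                                   # unnormalized: L = D - W
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(np.where(d > 0, d, 1.0)), 0.0)
    L_sym = np.eye(len(W)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    return L, L_sym

# Tiny example: two triangles joined by one weak edge.
W = np.zeros((6, 6))
for i, j, w in [(0, 1, 1), (1, 2, 1), (0, 2, 1), (3, 4, 1), (4, 5, 1), (3, 5, 1), (2, 3, 0.1)]:
    W[i, j] = W[j, i] = w

L, L_sym = laplacians(W)
# The second-smallest eigenvector (Fiedler vector) of either Laplacian
# separates the two triangles.
vals, vecs = np.linalg.eigh(L_sym)
print(np.round(vecs[:, 1], 2))
```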

[BibTex]

Beyond Pairwise Classification and Clustering Using Hypergraphs

Zhou, D., Huang, J., Schölkopf, B.

(143), Max Planck Institute for Biological Cybernetics, August 2005 (techreport)

Abstract
In many applications, relationships among objects of interest are more complex than pairwise. Simply approximating complex relationships as pairwise ones can lead to loss of information. An alternative for these applications is to analyze complex relationships among data directly, without the need to first convert the complex relationships into pairwise ones. A natural way to describe complex relationships is to use hypergraphs. A hypergraph is a graph in which edges can connect more than two vertices. Thus we consider learning from a hypergraph, and develop a general framework which is applicable to classification and clustering for complex relational data. We have applied our framework to real-world web classification problems and obtained encouraging results.
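
As a concrete illustration of working with hypergraphs directly, the snippet below builds one standard normalized hypergraph Laplacian from a vertex-hyperedge incidence matrix and uses its spectrum for a toy partition. The construction mirrors the authors' related work on hypergraph learning, but the code and the tiny hypergraph are our own illustration, not taken from the report.

```python
import numpy as np

def hypergraph_laplacian(H, w):
    """Normalized hypergraph Laplacian from an incidence matrix.

    H: (n_vertices, n_edges) 0/1 incidence matrix, H[v, e] = 1 iff vertex v
       belongs to hyperedge e.
    w: hyperedge weights, shape (n_edges,).
    """
    dv = H @ w                       # vertex degrees
    de = H.sum(axis=0)               # hyperedge degrees (cardinalities)
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    De_inv = np.diag(1.0 / de)
    theta = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt
    return np.eye(H.shape[0]) - theta

# Five vertices, three hyperedges: {0, 1, 2}, {2, 3}, {3, 4}.
H = np.array([[1, 0, 0],
              [1, 0, 0],
              [1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)
w = np.ones(3)
L = hypergraph_laplacian(H, w)
vals, vecs = np.linalg.eigh(L)
print(np.round(vecs[:, 1], 2))   # smooth "cut" direction over the hypergraph
```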

PDF [BibTex]

Liver Perfusion using Level Set Methods

Nowozin, S.

Biologische Kybernetik, Shanghai JiaoTong University, Shanghai, China, July 2005 (diplomathesis)

PDF [BibTex]

Generalized Nonnegative Matrix Approximations using Bregman Divergences

Sra, S., Dhillon, I.

Univ. of Texas at Austin, June 2005 (techreport)

[BibTex]

Measuring Statistical Dependence with Hilbert-Schmidt Norms

Gretton, A., Bousquet, O., Smola, A., Schölkopf, B.

(140), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, June 2005 (techreport)

Abstract
We propose an independence criterion based on the eigenspectrum of covariance operators in reproducing kernel Hilbert spaces (RKHSs), consisting of an empirical estimate of the Hilbert-Schmidt norm of the cross-covariance operator (we term this a Hilbert-Schmidt Independence Criterion, or HSIC). This approach has several advantages, compared with previous kernel-based independence criteria. First, the empirical estimate is simpler than any other kernel dependence test, and requires no user-defined regularisation. Second, there is a clearly defined population quantity which the empirical estimate approaches in the large sample limit, with exponential convergence guaranteed between the two: this ensures that independence tests based on HSIC do not suffer from slow learning rates. Finally, we show in the context of independent component analysis (ICA) that the performance of HSIC is competitive with that of previously published kernel-based criteria, and of other recently published ICA methods.
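
The biased empirical estimate described above has a compact form, tr(K H L H) / (n - 1)^2, with K and L the kernel matrices of the two samples and H the centering matrix. A small sketch with Gaussian kernels on invented data (the kernel choice and bandwidth are illustrative):

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def hsic(X, Y, gamma=1.0):
    """Biased empirical HSIC: tr(K H L H) / (n - 1)^2."""
    n = X.shape[0]
    K, L = rbf_kernel(X, gamma), rbf_kernel(Y, gamma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 1))
print(f"dependent:   {hsic(X, X**2):.4f}")           # nonlinearly dependent pair
print(f"independent: {hsic(X, rng.normal(size=(300, 1))):.4f}")
```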

PDF [BibTex]

Consistency of Kernel Canonical Correlation Analysis

Fukumizu, K., Bach, F., Gretton, A.

(942), Institute of Statistical Mathematics, 4-6-7 Minami-azabu, Minato-ku, Tokyo 106-8569 Japan, June 2005 (techreport)

PDF [BibTex]

Discriminative Methods for Label Sequence Learning

Altun, Y.

Brown University, Providence, RI, USA, May 2005 (phdthesis)

PDF [BibTex]

Efficient Adaptive Sampling of the Psychometric Function by Maximizing Information Gain

Tanner, TG.

Biologische Kybernetik, Eberhard-Karls University Tübingen, Tübingen, Germany, May 2005 (diplomathesis)

Abstract
A common task in psychophysics is to measure the psychometric function. A psychometric function can be described by its shape and four parameters: offset or threshold, slope or width, false alarm rate or chance level, and miss or lapse rate. Depending on the parameters of interest, some points on the psychometric function may be more informative than others. Adaptive methods attempt to place trials on the most informative points based on the data collected in previous trials. A new Bayesian adaptive psychometric method placing trials by minimising the expected entropy of the posterior probability distribution over a set of possible stimuli is introduced. The method is more flexible, faster and at least as efficient as the established method (Kontsevich and Tyler, 1999). Comparably accurate (2dB) threshold and slope estimates can be obtained after about 30 and 500 trials, respectively. By using a dynamic termination criterion the efficiency can be further improved. The method can be applied to all experimental designs including yes/no designs and allows acquisition of any set of free parameters. By weighting the importance of parameters one can include nuisance parameters and adjust the relative expected errors. Use of nuisance parameters may lead to more accurate estimates than assuming a guessed fixed value. Block designs are supported and do not harm the performance if a sufficient number of trials are performed. The method was evaluated by computer simulations in which the role of parametric assumptions, its robustness, the quality of different point estimates, the effect of dynamic termination criteria and many other settings were investigated.
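
A compact sketch of the general recipe referenced above: maintain a grid posterior over psychometric parameters, compute the expected posterior entropy for each candidate stimulus, and present the minimizer. It is a generic illustration in the spirit of such adaptive methods, not the thesis implementation; it estimates only threshold and slope with a fixed chance and lapse rate, and all grids and settings are invented.

```python
import numpy as np

# Candidate stimulus intensities and parameter grid (threshold, slope).
intensities = np.linspace(-2, 2, 41)
thresholds = np.linspace(-1.5, 1.5, 31)
slopes = np.linspace(0.5, 4.0, 21)
TH, SL = np.meshgrid(thresholds, slopes, indexing="ij")
guess, lapse = 0.5, 0.02            # fixed chance level and lapse rate

def p_yes(x):
    """P(positive response) at stimulus x for every (threshold, slope) pair."""
    f = 1.0 / (1.0 + np.exp(-SL * (x - TH)))       # logistic psychometric shape
    return guess + (1 - guess - lapse) * f

posterior = np.full(TH.shape, 1.0 / TH.size)        # uniform prior

def next_stimulus(posterior):
    """Stimulus minimizing the expected entropy of the updated posterior."""
    best_x, best_h = None, np.inf
    for x in intensities:
        p = p_yes(x)
        expected_h = 0.0
        for outcome_p in (p, 1 - p):                # response = 1 or 0
            unnorm = posterior * outcome_p
            mass = unnorm.sum()
            post = unnorm / mass
            expected_h += mass * -(post * np.log(post + 1e-12)).sum()
        if expected_h < best_h:
            best_x, best_h = x, expected_h
    return best_x

# Simulated observer and a short adaptive run.
rng = np.random.default_rng(0)
true_th, true_sl = 0.3, 2.0
for trial in range(60):
    x = next_stimulus(posterior)
    p_true = guess + (1 - guess - lapse) / (1 + np.exp(-true_sl * (x - true_th)))
    r = rng.uniform() < p_true
    posterior *= p_yes(x) if r else (1 - p_yes(x))
    posterior /= posterior.sum()

print("posterior-mean threshold:", float((posterior * TH).sum()))
print("posterior-mean slope:    ", float((posterior * SL).sum()))
```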

[BibTex]

Support Vector Classification of Images with Local Features

Blaschko, MB.

Biologische Kybernetik, University of Massachusetts, Amherst, May 2005 (diplomathesis)

[BibTex]

Efficient Pattern Selection for Support Vector Classifiers and its CRM Application

Shin, H.

Biologische Kybernetik, Seoul National University, Seoul, Korea, February 2005 (phdthesis)

PDF [BibTex]

Kernels: Regularization and Optimization

Ong, CS.

Biologische Kybernetik, The Australian National University, Canberra, Australia, 2005 (phdthesis)

PDF GZIP [BibTex]

Approximate Inference for Robust Gaussian Process Regression

Kuss, M., Pfingsten, T., Csato, L., Rasmussen, C.

(136), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, 2005 (techreport)

Abstract
Gaussian process (GP) priors have been successfully used in non-parametric Bayesian regression and classification models. Inference can be performed analytically only for the regression model with Gaussian noise. For all other likelihood models inference is intractable and various approximation techniques have been proposed. In recent years expectation-propagation (EP) has been developed as a general method for approximate inference. This article provides a general summary of how expectation-propagation can be used for approximate inference in Gaussian process models. Furthermore we present a case study describing its implementation for a new robust variant of Gaussian process regression. To gain further insights into the quality of the EP approximation we present experiments in which we compare to results obtained by Markov chain Monte Carlo (MCMC) sampling.
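
As background for the passage above, the only analytically tractable case, GP regression with a Gaussian likelihood, has the familiar closed-form posterior; anything beyond it (the robust likelihood, EP, MCMC) requires the approximations the report describes. A minimal sketch of that tractable baseline on toy data (kernel and hyperparameters are illustrative):

```python
import numpy as np

def rbf(A, B, lengthscale=1.0, variance=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=30)
noise = 0.1 ** 2

# Exact GP posterior mean and variance at test points (Gaussian noise only).
Xs = np.linspace(-3, 3, 7)[:, None]
K = rbf(X, X) + noise * np.eye(len(X))
Ks = rbf(X, Xs)
Kss = rbf(Xs, Xs)

L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
mean = Ks.T @ alpha
v = np.linalg.solve(L, Ks)
var = np.diag(Kss - v.T @ v)

print(np.round(mean, 2))
print(np.round(var, 3))
```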

PDF [BibTex]
