2019


Robot Learning for Muscular Systems

Büchler, D.

Technical University Darmstadt, Germany, December 2019 (phdthesis)

[BibTex]

Real Time Probabilistic Models for Robot Trajectories

Gomez-Gonzalez, S.

Technical University Darmstadt, Germany, December 2019 (phdthesis)

[BibTex]

Reinforcement Learning for a Two-Robot Table Tennis Simulation

Li, G.

RWTH Aachen University, Germany, July 2019 (mastersthesis)

[BibTex]

Learning Transferable Representations

Rojas-Carulla, M.

University of Cambridge, UK, 2019 (phdthesis)

[BibTex]

Sample-efficient deep reinforcement learning for continuous control

Gu, S.

University of Cambridge, UK, 2019 (phdthesis)

[BibTex]


Spatial Filtering based on Riemannian Manifold for Brain-Computer Interfacing

Xu, J.

Technical University of Munich, Germany, 2019 (mastersthesis)

[BibTex]

Quantification of tumor heterogeneity using PET/MRI and machine learning

Katiyar, P.

Eberhard Karls Universität Tübingen, Germany, 2019 (phdthesis)

[BibTex]


2011


Optimization for Machine Learning

Sra, S., Nowozin, S., Wright, S.

pages: 494, Neural information processing series, MIT Press, Cambridge, MA, USA, December 2011 (book)

Abstract
The interplay between optimization and machine learning is one of the most important developments in modern computational science. Optimization formulations and methods are proving to be vital in designing algorithms to extract essential knowledge from huge volumes of data. Machine learning, however, is not simply a consumer of optimization technology but a rapidly evolving field that is itself generating new optimization ideas. This book captures the state of the art of the interaction between optimization and machine learning in a way that is accessible to researchers in both fields. Optimization approaches have enjoyed prominence in machine learning because of their wide applicability and attractive theoretical properties. The increasing complexity, size, and variety of today's machine learning models call for the reassessment of existing assumptions. This book starts the process of reassessment. It describes the resurgence in novel contexts of established frameworks such as first-order methods, stochastic approximations, convex relaxations, interior-point methods, and proximal methods. It also devotes attention to newer themes such as regularized optimization, robust optimization, gradient and subgradient methods, splitting techniques, and second-order methods. Many of these techniques draw inspiration from other fields, including operations research, theoretical computer science, and subfields of optimization. The book will enrich the ongoing cross-fertilization between the machine learning community and these other fields, and within the broader optimization community.
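
As a concrete illustration of the first-order and proximal methods mentioned above (a minimal sketch, not taken from the book; the problem instance, step size, and regularization weight are arbitrary choices for the example), the following Python snippet runs proximal gradient descent (ISTA) on an l1-regularized least-squares problem using only numpy:

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    # Proximal gradient (ISTA) for min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)             # gradient of the least-squares term
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(ista(A, b, lam=0.1)[:8])

The gradient step handles the smooth loss and the soft-thresholding step handles the nonsmooth penalty; swapping in a different proximal operator changes the regularizer without changing the rest of the method.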

Web [BibTex]

Bayesian Time Series Models

Barber, D., Cemgil, A., Chiappa, S.

pages: 432, Cambridge University Press, Cambridge, UK, August 2011 (book)

[BibTex]

Crowdsourcing for optimisation of deconvolution methods via an iPhone application

Lang, A.

Hochschule Reutlingen, Germany, April 2011 (mastersthesis)

[BibTex]


Learning functions with kernel methods

Dinuzzo, F.

University of Pavia, Italy, January 2011 (phdthesis)

PDF [BibTex]

Handbook of Statistical Bioinformatics

Lu, H., Schölkopf, B., Zhao, H.

pages: 627, Springer Handbooks of Computational Statistics, Springer, Berlin, Germany, 2011 (book)

Web DOI [BibTex]

Model Learning in Robot Control

Nguyen-Tuong, D.

Albert-Ludwigs-Universität Freiburg, Germany, 2011 (phdthesis)

[BibTex]


2010


Approximate Inference in Graphical Models

Hennig, P.

University of Cambridge, November 2010 (phdthesis)

Web [BibTex]

Bayesian Inference and Experimental Design for Large Generalised Linear Models

Nickisch, H.

Biologische Kybernetik, Technische Universität Berlin, Berlin, Germany, September 2010 (phdthesis)

PDF Web [BibTex]

Method and device for recovering a digital image from a sequence of observed digital images

Harmeling, S., Hirsch, M., Sra, S., Schölkopf, B.

United States Provisional Patent Application, No 61387025, September 2010 (patent)

[BibTex]


Method for feature selection in a support vector machine using feature ranking

Weston, J., Elisseeff, A., Schölkopf, B., Pérez-Cruz, F., Guyon, I.

United States Patent, No 7805388, September 2010 (patent)

[BibTex]

Inferring High-Dimensional Causal Relations using Free Probability Theory

Zscheischler, J.

Humboldt Universität Berlin, Germany, August 2010 (diplomathesis)

PDF [BibTex]

PDF [BibTex]


no image
Kernels and methods for selecting kernels for use in learning machines

Bartlett, P. L., Elisseeff, A., Schölkopf, B., Chapelle, O.

United States Patent, No 7788193, August 2010 (patent)

[BibTex]

Predictive Representations For Sequential Decision Making Under Uncertainty

Boularias, A.

Université Laval, Quebec, Canada, July 2010 (phdthesis)

Abstract
The problem of making decisions is ubiquitous in life. This problem becomes even more complex when the decisions must be made sequentially: executing an action at a given time changes the environment, and this change cannot be predicted with certainty. The aim of a decision-making process is to select actions optimally in an uncertain environment. To this end, the environment is often modeled as a dynamical system with multiple states, and actions are executed so that the system evolves toward a desirable state. In this thesis, we proposed a family of stochastic models and algorithms to improve the quality of the decision-making process. The proposed models are alternatives to Markov Decision Processes, a widely used framework for this type of problem. In particular, we showed that the state of a dynamical system can be represented more compactly if it is described in terms of predictions of certain future events. We also showed that even the process of selecting actions, known as the policy, can be seen as a dynamical system. Starting from this observation, we proposed a range of algorithms, all based on predictive policy representations, to solve different decision-making problems such as decentralized planning, reinforcement learning, and imitation learning. We also demonstrated, analytically and empirically, that the proposed approaches reduce computational complexity and improve the quality of the decisions compared to standard approaches for planning and learning under uncertainty.
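
As a rough, simplified illustration of the claim that predictions of future events can serve as a compact state (a sketch for an uncontrolled hidden Markov model, not the thesis's construction for controlled systems), the snippet below builds the matrix of joint probabilities of past observation sequences ("histories") followed by future ones ("tests") and checks that its rank equals the number of hidden states, so a handful of predictions summarizes the entire history:

import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n_hidden, n_obs = 2, 3

# Random HMM: initial distribution pi, transition matrix T, emission matrix O.
pi = rng.dirichlet(np.ones(n_hidden))
T = rng.dirichlet(np.ones(n_hidden), size=n_hidden)   # T[i, j] = P(next state j | state i)
O = rng.dirichlet(np.ones(n_obs), size=n_hidden)      # O[i, o] = P(observation o | state i)

# Observable operator for observation o: A_o[i, j] = P(emit o in state i) * P(i -> j).
A = [np.diag(O[:, o]) @ T for o in range(n_obs)]

def op(seq):
    # Product of observable operators along an observation sequence.
    M = np.eye(n_hidden)
    for o in seq:
        M = M @ A[o]
    return M

# All observation sequences of length 1 or 2 serve as both histories and tests.
seqs = [s for length in (1, 2) for s in product(range(n_obs), repeat=length)]

# D[h, t] = P(observe history h and then test t), computed exactly from the model.
D = np.array([[pi @ op(h) @ op(t) @ np.ones(n_hidden) for t in seqs] for h in seqs])
print(D.shape, "rank:", np.linalg.matrix_rank(D))   # 12 x 12 matrix, rank 2

The low rank means every row (the predictions after a given history) is a combination of two basis predictions, which is the kind of compactness predictive representations exploit; estimating such predictions directly from data avoids positing hidden states at all.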

PDF [BibTex]


Semi-supervised Subspace Learning and Application to Human Functional Magnetic Brain Resonance Imaging Data

Shelton, J.

Biologische Kybernetik, Eberhard Karls Universität, Tübingen, Germany, July 2010 (diplomathesis)

PDF [BibTex]

Quantitative Evaluation of MR-based Attenuation Correction for Positron Emission Tomography (PET)

Mantlik, F.

Biologische Kybernetik, Universität Mannheim, Germany, March 2010 (diplomathesis)

[BibTex]

From Motor Learning to Interaction Learning in Robots

Sigaud, O., Peters, J.

pages: 538, Studies in Computational Intelligence ; 264, (Editors: O Sigaud, J Peters), Springer, Berlin, Germany, January 2010 (book)

Abstract
From an engineering standpoint, the increasing complexity of robotic systems and the growing demand for robots that learn more autonomously have made learning capabilities essential. This book is largely based on the successful workshop "From motor to interaction learning in robots" held at the IEEE/RSJ International Conference on Intelligent Robots and Systems. The major aim of the book is to give students interested in these topics a chance to get started faster, and to give researchers a helpful compendium.

Web DOI [BibTex]

Finding Gene-Gene Interactions using Support Vector Machines

Rakitsch, B.

Eberhard Karls Universität Tübingen, Germany, 2010 (diplomathesis)

[BibTex]

Structural and Relational Data Mining for Systems Biology Applications

Georgii, E.

Eberhard Karls Universität Tübingen, Germany, 2010 (phdthesis)

Web [BibTex]

Population Coding in the Visual System: Statistical Methods and Theory

Macke, J.

Eberhard Karls Universität Tübingen, Germany, 2010 (phdthesis)

[BibTex]

Bayesian Methods for Neural Data Analysis

Gerwinn, S.

Eberhard Karls Universität Tübingen, Germany, 2010 (phdthesis)

Web [BibTex]

Clustering with Neighborhood Graphs

Maier, M.

Universität des Saarlandes, Saarbrücken, Germany, 2010 (phdthesis)

Web [BibTex]

Detecting and modeling time shifts in microarray time series data applying Gaussian processes

Zwießele, M.

Eberhard Karls Universität Tübingen, Germany, 2010 (thesis)

[BibTex]

Detecting the mincut in sparse random graphs

Köhler, R.

Eberhard Karls Universität Tübingen, Germany, 2010 (diplomathesis)

[BibTex]

A wider view on encoding and decoding in the visual brain-computer interface speller system

Martens, S.

Eberhard Karls Universität Tübingen, Germany, 2010 (phdthesis)

[BibTex]

2002


Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond

Schölkopf, B., Smola, A.

pages: 644, Adaptive Computation and Machine Learning, MIT Press, Cambridge, MA, USA, December 2002, Parts of this book, including an introduction to kernel methods, can be downloaded here. (book)

Abstract
In the 1990s, a new type of learning algorithm was developed, based on results from statistical learning theory: the Support Vector Machine (SVM). This gave rise to a new class of theoretically elegant learning machines that use a central concept of SVMs, kernels, for a number of learning tasks. Kernel machines provide a modular framework that can be adapted to different tasks and domains by the choice of the kernel function and the base algorithm. They are replacing neural networks in a variety of fields, including engineering, information retrieval, and bioinformatics. Learning with Kernels provides an introduction to SVMs and related kernel methods. Although the book begins with the basics, it also includes the latest research. It provides all of the concepts necessary to enable a reader equipped with some basic mathematical knowledge to enter the world of machine learning using theoretically well-founded yet easy-to-use kernel algorithms and to understand and apply the powerful algorithms that have been developed over the last few years.
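
The modularity described above (choose a kernel, keep the base algorithm) can be illustrated with a small sketch that is not taken from the book; it uses kernel ridge regression as the base algorithm, and the data, kernels, and regularization constant are arbitrary choices for the example:

import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly_kernel(X, Z, degree=3):
    # Inhomogeneous polynomial kernel.
    return (X @ Z.T + 1.0) ** degree

def kernel_ridge(kernel, X, y, X_test, lam=1e-2):
    # Base algorithm: solve one regularized linear system in the dual, then predict.
    K = kernel(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return kernel(X_test, X) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(80)
X_test = np.linspace(-3, 3, 5)[:, None]

for k in (rbf_kernel, poly_kernel):
    print(k.__name__, np.round(kernel_ridge(k, X, y, X_test), 3))

Only the kernel function changes between the two runs; the base algorithm, and therefore the code that uses it, stays the same, which is the sense in which kernel machines are modular.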

Web [BibTex]

Nonlinear Multivariate Analysis with Geodesic Kernels

Kuss, M.

Biologische Kybernetik, Technische Universität Berlin, February 2002 (diplomathesis)

GZIP [BibTex]

Concentration Inequalities and Empirical Processes Theory Applied to the Analysis of Learning Algorithms

Bousquet, O.

Biologische Kybernetik, Ecole Polytechnique, 2002 (phdthesis) Accepted

Abstract
New classification algorithms based on the notion of 'margin' (e.g. Support Vector Machines, Boosting) have recently been developed. The goal of this thesis is to better understand how they work, via a study of their theoretical performance. In order to do this, a general framework for real-valued classification is proposed. In this framework, it appears that the natural tools to use are Concentration Inequalities and Empirical Processes Theory. Thanks to an adaptation of these tools, a new measure of the size of a class of functions is introduced, which can be computed from the data. This makes it possible, on the one hand, to better understand the role of the eigenvalues of the kernel matrix in Support Vector Machines and, on the other hand, to obtain empirical model selection criteria.
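
A loose illustration of such a data-dependent size measure, and of how the kernel matrix spectrum enters, is sketched below in the spirit of empirical Rademacher complexity; the data set and kernel are arbitrary, and the thesis's exact quantities are not reproduced here:

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))
n = len(X)
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))   # RBF Gram matrix

# For the unit ball of the RKHS, sup over ||f|| <= 1 of (1/n) * sum_i s_i f(x_i)
# equals sqrt(s^T K s) / n, so the empirical complexity is an average over random signs s.
signs = rng.choice([-1.0, 1.0], size=(2000, n))
estimate = np.mean([np.sqrt(s @ K @ s) / n for s in signs])

# Jensen's inequality bounds it by sqrt(trace(K)) / n, i.e. by the Gram eigenvalue sum.
bound = np.sqrt(np.trace(K)) / n
print("Monte Carlo estimate:", round(estimate, 4), " eigenvalue bound:", round(bound, 4))

Both quantities are computed from the data alone, which is what makes this kind of capacity measure usable for empirical model selection.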

PostScript [BibTex]


Support Vector Machines: Induction Principle, Adaptive Tuning and Prior Knowledge

Chapelle, O.

Biologische Kybernetik, 2002 (phdthesis)

Abstract
This thesis presents a theoretical and practical study of Support Vector Machines (SVMs) and related learning algorithms. In the first part, we introduce a new induction principle from which SVMs can be derived, and some new algorithms are also presented in this framework. In the second part, after studying how to estimate the generalization error of an SVM, we suggest choosing the kernel parameters of an SVM by minimizing this estimate. Several applications such as feature selection are presented. Finally, the third part deals with the incorporation of prior knowledge into a learning algorithm; more specifically, we study the case of known invariant transformations and the use of unlabeled data.
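
The span bound and the gradient-based tuning studied in the thesis are not reproduced here; as a stand-in that follows the same recipe (estimate the generalization error as a function of the kernel parameter, then minimize it), the sketch below selects an RBF bandwidth by the closed-form leave-one-out error of kernel ridge regression. The data, parameter grid, and regularization constant are assumptions for the example:

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(2 * X[:, 0]) + 0.2 * rng.standard_normal(60)

def loo_error(gamma, lam=1e-2):
    # Kernel ridge regression with an RBF kernel of width gamma.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    H = K @ np.linalg.inv(K + lam * np.eye(len(X)))     # hat matrix mapping y to fitted values
    loo_residuals = (y - H @ y) / (1.0 - np.diag(H))    # exact leave-one-out residuals
    return np.mean(loo_residuals ** 2)

gammas = [0.01, 0.1, 1.0, 10.0]
print({g: round(loo_error(g), 4) for g in gammas})
print("chosen gamma:", min(gammas, key=loo_error))

The same pattern extends to other error estimates and to gradient-based search over continuous kernel parameters, which is the direction developed in the thesis.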

GZIP [BibTex]