

2011


no image
Projected Newton-type methods in machine learning

Schmidt, M., Kim, D., Sra, S.

In Optimization for Machine Learning, pages: 305-330, (Editors: Sra, S., Nowozin, S. and Wright, S. J.), MIT Press, Cambridge, MA, USA, December 2011 (inbook)

Abstract
We consider projected Newton-type methods for solving large-scale optimization problems arising in machine learning and related fields. We first introduce an algorithmic framework for projected Newton-type methods by reviewing a canonical projected (quasi-)Newton method. This method, while conceptually pleasing, has a high computational cost per iteration. Thus, we discuss two more scalable variants, namely two-metric projection and inexact projection methods. Finally, we show how to apply the Newton-type framework to handle non-smooth objectives. Examples are provided throughout the chapter to illustrate machine learning applications of our framework.
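
The framework invites a compact illustration. Below is a minimal sketch (not the authors' code) of the naive projected Newton step for a box-constrained quadratic; the function names and the plain step-then-project combination are our own simplifications, and the chapter itself explains why this naive combination needs refinement (e.g., two-metric projection).

```python
# Naive projected Newton step for min f(x) s.t. lower <= x <= upper.
# Illustrative sketch only; see the chapter for the scalable variants.
import numpy as np

def project_box(x, lower, upper):
    """Euclidean projection onto the box [lower, upper]."""
    return np.clip(x, lower, upper)

def projected_newton(grad, hess, x0, lower, upper, alpha=1.0, iters=50):
    x = x0.copy()
    for _ in range(iters):
        d = np.linalg.solve(hess(x), grad(x))   # Newton direction
        x = project_box(x - alpha * d, lower, upper)
    return x

# Example: quadratic f(x) = 0.5 x'Ax - b'x on the unit box.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([4.0, 1.0])
x_star = projected_newton(lambda x: A @ x - b, lambda x: A,
                          np.zeros(2), 0.0, 1.0)
print(x_star)   # converges to the constrained minimizer [1, 0]
```

A quasi-Newton variant would replace the exact Hessian solve with an L-BFGS approximation, which is where the scalability questions treated in the chapter arise.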

PDF Web [BibTex]

Optimization for Machine Learning

Sra, S., Nowozin, S., Wright, S.

pages: 494, Neural information processing series, MIT Press, Cambridge, MA, USA, December 2011 (book)

Abstract
The interplay between optimization and machine learning is one of the most important developments in modern computational science. Optimization formulations and methods are proving to be vital in designing algorithms to extract essential knowledge from huge volumes of data. Machine learning, however, is not simply a consumer of optimization technology but a rapidly evolving field that is itself generating new optimization ideas. This book captures the state of the art of the interaction between optimization and machine learning in a way that is accessible to researchers in both fields. Optimization approaches have enjoyed prominence in machine learning because of their wide applicability and attractive theoretical properties. The increasing complexity, size, and variety of today's machine learning models call for the reassessment of existing assumptions. This book starts the process of reassessment. It describes the resurgence in novel contexts of established frameworks such as first-order methods, stochastic approximations, convex relaxations, interior-point methods, and proximal methods. It also devotes attention to newer themes such as regularized optimization, robust optimization, gradient and subgradient methods, splitting techniques, and second-order methods. Many of these techniques draw inspiration from other fields, including operations research, theoretical computer science, and subfields of optimization. The book will enrich the ongoing cross-fertilization between the machine learning community and these other fields, and within the broader optimization community.

Web [BibTex]

Bayesian Time Series Models

Barber, D., Cemgil, A., Chiappa, S.

pages: 432, Cambridge University Press, Cambridge, UK, August 2011 (book)

[BibTex]

Statistical Learning Theory: Models, Concepts, and Results

von Luxburg, U., Schölkopf, B.

In Handbook of the History of Logic, Vol. 10: Inductive Logic, 10, pages: 651-706, (Editors: Gabbay, D. M., Hartmann, S. and Woods, J. H.), Elsevier North Holland, Amsterdam, Netherlands, May 2011 (inbook)

Abstract
Statistical learning theory provides the theoretical basis for many of today's machine learning algorithms and is arguably one of the most beautifully developed branches of artificial intelligence in general. It originated in Russia in the 1960s and gained wide popularity in the 1990s following the development of the so-called Support Vector Machine (SVM), which has become a standard tool for pattern recognition in a variety of domains ranging from computer vision to computational biology. Providing the basis of new learning algorithms, however, was not the only motivation for developing statistical learning theory. It was just as much a philosophical one, attempting to answer the question of what it is that allows us to draw valid conclusions from empirical data. In this article we attempt to give a gentle, non-technical overview of the key ideas and insights of statistical learning theory. We do not assume that the reader has a deep background in mathematics, statistics, or computer science. Given the nature of the subject matter, however, some familiarity with mathematical concepts and notations and some intuitive understanding of basic probability is required. There exist many excellent references to more technical surveys of the mathematics of statistical learning theory: the monographs by one of the founders of statistical learning theory ([Vapnik, 1995], [Vapnik, 1998]), a brief overview over statistical learning theory in Section 5 of [Schölkopf and Smola, 2002], more technical overview papers such as [Bousquet et al., 2003], [Mendelson, 2003], [Boucheron et al., 2005], [Herbrich and Williamson, 2002], and the monograph [Devroye et al., 1996].
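
For orientation, the flavor of result at the heart of this theory is the uniform generalization bound; a classical VC-type statement (paraphrased in our notation, not quoted from the chapter) reads: with probability at least 1 − δ over an i.i.d. sample of size n, every classifier f from a class of VC dimension d satisfies

```latex
% Classical VC-type generalization bound (informal; constants vary
% between references) -- our notation, supplied for orientation.
R(f) \;\le\; R_{\mathrm{emp}}(f) + \sqrt{\frac{d\,(\ln(2n/d) + 1) + \ln(4/\delta)}{n}}
```

where R(f) is the true risk and R_emp(f) the empirical (training) risk. Its message is the one the article develops informally: a small training error generalizes when the capacity d is small relative to n.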

PDF Web DOI [BibTex]

Crowdsourcing for optimisation of deconvolution methods via an iPhone application

Lang, A.

Hochschule Reutlingen, Germany, April 2011 (mastersthesis)

[BibTex]


Robot Learning

Peters, J., Tedrake, R., Roy, N., Morimoto, J.

In Encyclopedia of Machine Learning, pages: 865-869, Encyclopedia of machine learning, (Editors: Sammut, C. and Webb, G. I.), Springer, New York, NY, USA, January 2011 (inbook)

PDF Web DOI [BibTex]

What You Expect Is What You Get? Potential Use of Contingent Negative Variation for Passive BCI Systems in Gaze-Based HCI

Ihme, K., Zander, TO.

In Affective Computing and Intelligent Interaction, 6975, pages: 447-456, Lecture Notes in Computer Science, (Editors: D’Mello, S., Graesser, A., Schuller, B. and Martin, J.-C.), Springer, Berlin, Germany, 2011 (inbook)

Abstract
When using eye movements for cursor control in human-computer interaction (HCI), it may be difficult to find an appropriate substitute for the click operation. Most approaches make use of dwell times. However, in this context the so-called Midas-Touch-Problem occurs, meaning that the system wrongly interprets fixations, due to long processing times or spontaneous dwellings of the user, as a command. Lately it has been shown that brain-computer interface (BCI) input bears good prospects to overcome this problem, using imagined hand movements to elicit a selection. The current approach develops this idea further by exploring potential signals for use in a passive BCI, which would have the advantage that the brain signals used as input are generated automatically, without conscious effort of the user. To explore event-related potentials (ERPs) giving information about the user’s intention to select an object, 32-channel electroencephalography (EEG) was recorded from ten participants interacting with a dwell-time-based system. Comparing ERP signals during the dwell time with those occurring during fixations on a neutral cross hair revealed a sustained negative slow cortical potential at central electrode sites. This negativity might be a contingent negative variation (CNV) reflecting the participants’ anticipation of the upcoming selection. Offline classification suggests that the CNV is detectable in single trials (mean accuracy 74.9%). Future research on the CNV should establish its stable occurrence in human-computer interaction and enable its use as a potential substitute for the click operation.
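
For readers wanting to see the shape of such an offline analysis, here is a minimal sketch of single-trial ERP classification; the data shapes, the mean-amplitude features, and the LDA classifier are our illustrative assumptions, not the study's actual pipeline.

```python
# Toy single-trial analysis: classify EEG epochs (dwell vs. neutral
# fixation) from slow-potential features. Purely illustrative.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
epochs = rng.standard_normal((200, 32, 250))  # trials x channels x samples
labels = rng.integers(0, 2, 200)              # 1 = dwell, 0 = fixation

# A CNV-style feature: mean amplitude per channel over the late epoch,
# where a sustained negativity would show up.
features = epochs[:, :, 125:].mean(axis=2)

acc = cross_val_score(LinearDiscriminantAnalysis(), features, labels, cv=5)
print(acc.mean())   # chance level here, since the toy data carry no signal
```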

DOI [BibTex]

Kernel Methods in Bioinformatics

Borgwardt, KM.

In Handbook of Statistical Bioinformatics, pages: 317-334, Springer Handbooks of Computational Statistics ; 3, (Editors: Lu, H.H.-S., Schölkopf, B. and Zhao, H.), Springer, Berlin, Germany, 2011 (inbook)

Abstract
Kernel methods have now witnessed more than a decade of increasing popularity in the bioinformatics community. In this article, we will compactly review this development, examining the areas in which kernel methods have contributed to computational biology and describing the reasons for their success.

PDF DOI [BibTex]

Handbook of Statistical Bioinformatics

Lu, H., Schölkopf, B., Zhao, H.

pages: 627, Springer Handbooks of Computational Statistics, Springer, Berlin, Germany, 2011 (book)

Web DOI [BibTex]

Cue Combination: Beyond Optimality

Rosas, P., Wichmann, F.

In Sensory Cue Integration, pages: 144-152, (Editors: Trommershäuser, J., Körding, K. and Landy, M. S.), Oxford University Press, 2011 (inbook)

[BibTex]

Model Learning in Robot Control

Nguyen-Tuong, D.

Albert-Ludwigs-Universität Freiburg, Germany, 2011 (phdthesis)

[BibTex]


2007


Some Theoretical Aspects of Human Categorization Behavior: Similarity and Generalization

Jäkel, F.

Biologische Kybernetik, Eberhard-Karls-Universität Tübingen, Tübingen, Germany, November 2007, passed with "ausgezeichnet", summa cum laude, published online (phdthesis)

PDF [BibTex]

Statistical Learning Theory Approaches to Clustering

Jegelka, S.

Biologische Kybernetik, Eberhard-Karls-Universität Tübingen, Tübingen, Germany, November 2007 (diplomathesis)

PDF [BibTex]

Support Vector Machine Learning for Interdependent and Structured Output Spaces

Altun, Y., Hofmann, T., Tsochantaridis, I.

In Predicting Structured Data, pages: 85-104, Advances in neural information processing systems, (Editors: Bakir, G. H. , T. Hofmann, B. Schölkopf, A. J. Smola, B. Taskar, S. V. N. Vishwanathan), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Web [BibTex]

Brisk Kernel ICA

Jegelka, S., Gretton, A.

In Large Scale Kernel Machines, pages: 225-250, Neural Information Processing, (Editors: Bottou, L. , O. Chapelle, D. DeCoste, J. Weston), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Abstract
Recent approaches to independent component analysis have used kernel independence measures to obtain very good performance in ICA, particularly in areas where classical methods experience difficulty (for instance, sources with near-zero kurtosis). In this chapter, we compare two efficient extensions of these methods for large-scale problems: random subsampling of entries in the Gram matrices used in defining the independence measures, and incomplete Cholesky decomposition of these matrices. We derive closed-form, efficiently computable approximations for the gradients of these measures, and compare their performance on ICA using both artificial and music data. We show that kernel ICA can scale up to much larger problems than yet attempted, and that incomplete Cholesky decomposition performs better than random sampling.
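
The workhorse behind the second extension can be sketched compactly. Below is the standard pivoted incomplete Cholesky factorization of a Gram matrix (textbook form, not the authors' code); the RBF kernel and all sizes are arbitrary choices for the demo.

```python
# Incomplete Cholesky: low-rank G with K ~= G G^T, built column by column
# from the largest remaining diagonal pivot, without forming K in full.
import numpy as np

def incomplete_cholesky(kernel_col, diag, tol=1e-6, max_rank=100):
    """kernel_col(i) returns column i of K; diag is the diagonal of K."""
    n = diag.shape[0]
    d = diag.astype(float).copy()
    G = np.zeros((n, max_rank))
    for k in range(max_rank):
        i = int(np.argmax(d))
        if d[i] <= tol:          # remaining approximation error negligible
            return G[:, :k]
        G[:, k] = (kernel_col(i) - G[:, :k] @ G[i, :k]) / np.sqrt(d[i])
        d -= G[:, k] ** 2
    return G

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
rbf_col = lambda i: np.exp(-0.5 * ((X - X[i]) ** 2).sum(axis=1))
G = incomplete_cholesky(rbf_col, np.ones(len(X)))
K = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))
print(G.shape, np.abs(K - G @ G.T).max())  # error shrinks as max_rank grows
```

Plugging such a factor G into a kernel independence measure replaces O(n²) Gram-matrix algebra with operations that are linear in n for fixed rank, which is the scalability gain the chapter quantifies.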

PDF Web [BibTex]

Predicting Structured Data

Bakir, G., Hofmann, T., Schölkopf, B., Smola, A., Taskar, B., Vishwanathan, S.

pages: 360, Advances in neural information processing systems, MIT Press, Cambridge, MA, USA, September 2007 (book)

Abstract
Machine learning develops intelligent computer systems that are able to generalize from previously seen examples. A new domain of machine learning, in which the prediction must satisfy the additional constraints found in structured data, poses one of machine learning’s greatest challenges: learning functional dependencies between arbitrary input and output domains. This volume presents and analyzes the state of the art in machine learning algorithms and theory in this novel field. The contributors discuss applications as diverse as machine translation, document markup, computational biology, and information extraction, among others, providing a timely overview of an exciting field.

Web [BibTex]

Training a Support Vector Machine in the Primal

Chapelle, O.

In Large Scale Kernel Machines, pages: 29-50, Neural Information Processing, (Editors: Bottou, L. , O. Chapelle, D. DeCoste, J. Weston), MIT Press, Cambridge, MA, USA, September 2007, a slightly updated version of the Neural Computation paper (inbook)

Abstract
Most of the literature on Support Vector Machines (SVMs) concentrates on the dual optimization problem. In this paper, we would like to point out that the primal problem can also be solved efficiently, both for linear and non-linear SVMs, and that there is no reason to ignore this possibility. On the contrary, from the primal point of view new families of algorithms for large scale SVM training can be investigated.
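
The chapter's point is easy to demonstrate in the linear case. Here is a bare-bones sketch (ours, not Chapelle's code, which uses Newton steps and also handles kernels): gradient descent directly on the regularized squared-hinge primal objective.

```python
# Primal training of a linear SVM: minimize
#   lam * ||w||^2 + (1/n) * sum_i max(0, 1 - y_i w.x_i)^2
# by plain gradient descent. Illustrative sketch only.
import numpy as np

def train_primal_svm(X, y, lam=1e-2, lr=0.1, iters=500):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        margins = y * (X @ w)
        active = margins < 1                  # margin violators
        grad = 2 * lam * w \
             - 2.0 / n * (y[active] * (1 - margins[active])) @ X[active]
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
y = rng.choice([-1.0, 1.0], size=200)
X = rng.standard_normal((200, 2)) + 1.5 * y[:, None]  # separable-ish classes
w = train_primal_svm(X, y)
print((np.sign(X @ w) == y).mean())          # training accuracy
```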

PDF Web [BibTex]

Approximation Methods for Gaussian Process Regression

Quiñonero-Candela, J., Rasmussen, CE., Williams, CKI.

In Large Scale Kernel Machines, pages: 203-223, Neural Information Processing, (Editors: Bottou, L. , O. Chapelle, D. DeCoste, J. Weston), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Abstract
A wealth of computationally efficient approximation methods for Gaussian process regression have been recently proposed. We give a unifying overview of sparse approximations, following Quiñonero-Candela and Rasmussen (2005), and a brief review of approximate matrix-vector multiplication methods.
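
One member of the unified family is easy to sketch. The following shows the Subset of Regressors (SoR) predictive mean with m inducing inputs; this is our minimal rendition, and the kernel, noise level, and random inducing set are demo assumptions.

```python
# Subset of Regressors (SoR) approximation to the GP predictive mean:
#   mu(X*) = K*u (noise^2 Kuu + Kuf Kfu)^{-1} Kuf y
# Cost is O(n m^2) for m inducing inputs instead of O(n^3) for the exact GP.
import numpy as np

def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def sor_mean(Xtrain, y, Xtest, Xu, noise=0.1):
    Kuu = rbf(Xu, Xu)
    Kuf = rbf(Xu, Xtrain)
    M = noise**2 * Kuu + Kuf @ Kuf.T
    return rbf(Xtest, Xu) @ np.linalg.solve(M, Kuf @ y)

rng = np.random.default_rng(0)
Xtrain = rng.uniform(-3, 3, (500, 1))
y = np.sin(Xtrain[:, 0]) + 0.1 * rng.standard_normal(500)
Xu = Xtrain[rng.choice(500, 20, replace=False)]   # 20 inducing inputs
Xtest = np.linspace(-3, 3, 5)[:, None]
print(sor_mean(Xtrain, y, Xtest, Xu))             # roughly sin(Xtest)
```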

PDF Web [BibTex]

Trading Convexity for Scalability

Collobert, R., Sinz, F., Weston, J., Bottou, L.

In Large Scale Kernel Machines, pages: 275-300, Neural Information Processing, (Editors: Bottou, L. , O. Chapelle, D. DeCoste, J. Weston), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Abstract
Convex learning algorithms, such as Support Vector Machines (SVMs), are often seen as highly desirable because they offer strong practical properties and are amenable to theoretical analysis. However, in this work we show how nonconvexity can provide scalability advantages over convexity. We show how concave-convex programming can be applied to produce (i) faster SVMs where training errors are no longer support vectors, and (ii) much faster Transductive SVMs.
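
The construction at the heart of the chapter can be stated in one line. Writing H_a(z) = max(0, a − z) for a hinge at position a, the ramp loss used to suppress the influence of outlying training errors decomposes as (standard presentation; notation ours):

```latex
% Ramp loss as a difference of hinges, with s < 1 fixed:
R_s(z) \;=\; \min\!\bigl(1 - s,\; H_1(z)\bigr) \;=\; H_1(z) - H_s(z)
```

Since H_1 is convex and −H_s is concave, the concave-convex procedure applies: repeatedly linearize −H_s at the current iterate and solve the resulting convex SVM-like problem. Points with z < s contribute a constant loss and drop out of the expansion, which is how training errors cease to be support vectors.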

PDF Web [BibTex]

Classifying Event-Related Desynchronization in EEG, ECoG and MEG signals

Hill, N., Lal, T., Tangermann, M., Hinterberger, T., Widman, G., Elger, C., Schölkopf, B., Birbaumer, N.

In Toward Brain-Computer Interfacing, pages: 235-260, Neural Information Processing, (Editors: G Dornhege and J del R Millán and T Hinterberger and DJ McFarland and K-R Müller), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

PDF Web [BibTex]

Joint Kernel Maps

Weston, J., Bakir, G., Bousquet, O., Mann, T., Noble, W., Schölkopf, B.

In Predicting Structured Data, pages: 67-84, Advances in neural information processing systems, (Editors: GH Bakir and T Hofmann and B Schölkopf and AJ Smola and B Taskar and SVN Vishwanathan), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Web [BibTex]

Brain-Computer Interfaces for Communication in Paralysis: A Clinical Experimental Approach

Hinterberger, T., Nijboer, F., Kübler, A., Matuz, T., Furdea, A., Mochty, U., Jordan, M., Lal, T., Hill, J., Mellinger, J., Bensch, M., Tangermann, M., Widman, G., Elger, C., Rosenstiel, W., Schölkopf, B., Birbaumer, N.

In Toward Brain-Computer Interfacing, pages: 43-64, Neural Information Processing, (Editors: G. Dornhege and J del R Millán and T Hinterberger and DJ McFarland and K-R Müller), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

PDF Web [BibTex]

Error Correcting Codes for the P300 Visual Speller

Biessmann, F.

Biologische Kybernetik, Eberhard-Karls-Universität Tübingen, Tübingen, Germany, July 2007 (diplomathesis)

Abstract
The aim of brain-computer interface (BCI) research is to establish a communication system based on intentional modulation of brain activity. This is accomplished by classifying patterns of brain activity, volitionally induced by the user. The BCI presented in this study is based on a classical paradigm as proposed by (Farwell and Donchin, 1988), the P300 visual speller. Recording electroencephalograms (EEG) from the scalp while presenting letters successively to the user, the speller can infer from the brain signal which letter the user was focussing on. Since EEG recordings are noisy, usually many repetitions are needed to detect the correct letter. The focus of this study was to improve the accuracy of the visual speller by applying some basic principles from information theory: stimulus sequences of the speller were modified into error-correcting codes. Additionally, a language model was incorporated into the probabilistic letter decoder. Classification of single EEG epochs was less accurate using error-correcting codes. However, the novel code could compensate for this such that, overall, letter accuracies were as high as or even higher than for classical stimulus codes. In particular at high noise levels, error-correcting decoding achieved higher letter accuracies.
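
The decoding step can be sketched in a few lines. Each letter is assigned a binary codeword recording in which stimulus events it is highlighted, and the letter is read out as the codeword nearest to the vector of per-event classifier scores. The toy codebook below is ours, not the thesis's actual error-correcting construction, and the language-model prior is omitted.

```python
# Toy nearest-codeword decoding for a P300-style speller.
import numpy as np

letters = "ABCDEFGH"
# 8 letters x 6 stimulus events; a real design would use a code with a
# large minimum Hamming distance between rows.
codebook = np.array([[int(b) for b in f"{i:06b}"] for i in range(8)])

def decode(scores):
    """scores: soft per-event classifier outputs in [0, 1]."""
    dists = np.abs(codebook - scores).sum(axis=1)   # soft Hamming distance
    return letters[int(np.argmin(dists))]

rng = np.random.default_rng(0)
noisy = np.clip(codebook[3] + 0.3 * rng.standard_normal(6), 0, 1)
print(decode(noisy))   # 'D', unless the noise corrupts too many events
```

The benefit reported in the abstract comes from exactly this redundancy: single-event classification may get worse, but the codeword-level distance still separates the letters.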

PDF [BibTex]

Nonparametric Bayesian Discrete Latent Variable Models for Unsupervised Learning

Görür, D.

Biologische Kybernetik, Technische Universität Berlin, Berlin, Germany, April 2007, published online (phdthesis)

PDF PDF [BibTex]

Probabilistic Structure Calculation

Rieping, W., Habeck, M., Nilges, M.

In Structure and Biophysics: New Technologies for Current Challenges in Biology and Beyond, pages: 81-98, NATO Security through Science Series, (Editors: Puglisi, J. D.), Springer, Berlin, Germany, March 2007 (inbook)

Web DOI [BibTex]

Applications of Kernel Machines to Structured Data

Eichhorn, J.

Biologische Kybernetik, Technische Universität Berlin, Berlin, Germany, March 2007, passed with "sehr gut", published online (phdthesis)

PDF [BibTex]

A priori Knowledge from Non-Examples

Sinz, FH.

Biologische Kybernetik, Eberhard-Karls-Universität Tübingen, Tübingen, Germany, March 2007 (diplomathesis)

PDF Web [BibTex]

Machine Learning for Mass Production and Industrial Engineering

Pfingsten, T.

Biologische Kybernetik, Eberhard-Karls-Universität Tübingen, Tübingen, Germany, February 2007 (phdthesis)

PDF [BibTex]

On the Pre-Image Problem in Kernel Methods

Bakir, G., Schölkopf, B., Weston, J.

In Kernel Methods in Bioengineering, Signal and Image Processing, pages: 284-302, (Editors: G Camps-Valls and JL Rojo-Álvarez and M Martínez-Ramón), Idea Group Publishing, Hershey, PA, USA, January 2007 (inbook)

Abstract
In this chapter we are concerned with the problem of reconstructing patterns from their representation in feature space, known as the pre-image problem. We review existing algorithms and propose a learning based approach. All algorithms are discussed regarding their usability and complexity and evaluated on an image denoising application.
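
Among the existing algorithms reviewed, the classical fixed-point iteration for Gaussian kernels is the most compact, so we sketch it here (standard scheme, not the learning-based approach this chapter proposes; the starting heuristic and parameters are ours).

```python
# Fixed-point pre-image iteration for an RBF kernel: find z such that
# Phi(z) approximates sum_i gamma_i Phi(x_i) in feature space.
import numpy as np

def rbf_preimage(X, gamma, sigma=1.0, iters=100):
    z = X[int(np.argmax(np.abs(gamma)))].copy()    # heuristic start
    for _ in range(iters):
        w = gamma * np.exp(-((X - z) ** 2).sum(axis=1) / (2 * sigma**2))
        if abs(w.sum()) < 1e-12:
            break                                  # update undefined
        z = (w @ X) / w.sum()                      # weighted mean of inputs
    return z

# Toy check: the pre-image of the mean of two feature vectors lies
# between the two inputs.
X = np.array([[0.0, 0.0], [2.0, 0.0]])
print(rbf_preimage(X, np.array([0.5, 0.5])))       # approx [1, 0]
```

Like all such iterations it can stall in poor local fixed points, which is part of the motivation for the learning-based alternative the chapter develops.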

DOI [BibTex]

Development of a Brain-Computer Interface Approach Based on Covert Attention to Tactile Stimuli

Raths, C.

University of Tübingen, Germany, January 2007 (diplomathesis)

[BibTex]

A Machine Learning Approach for Estimating the Attenuation Map for a Combined PET/MR Scanner

Hofmann, M.

Biologische Kybernetik, Max Planck Institute for Biological Cybernetics, Tübingen, Germany, 2007 (diplomathesis)

[BibTex]


2002


Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond

Schölkopf, B., Smola, A.

pages: 644, Adaptive Computation and Machine Learning, MIT Press, Cambridge, MA, USA, December 2002, parts of this book, including an introduction to kernel methods, are available for download (book)

Abstract
In the 1990s, a new type of learning algorithm was developed, based on results from statistical learning theory: the Support Vector Machine (SVM). This gave rise to a new class of theoretically elegant learning machines that use a central concept of SVMs-kernels—for a number of learning tasks. Kernel machines provide a modular framework that can be adapted to different tasks and domains by the choice of the kernel function and the base algorithm. They are replacing neural networks in a variety of fields, including engineering, information retrieval, and bioinformatics. Learning with Kernels provides an introduction to SVMs and related kernel methods. Although the book begins with the basics, it also includes the latest research. It provides all of the concepts necessary to enable a reader equipped with some basic mathematical knowledge to enter the world of machine learning using theoretically well-founded yet easy-to-use kernel algorithms and to understand and apply the powerful algorithms that have been developed over the last few years.

Web [BibTex]


2001


Variationsverfahren zur Untersuchung von Grundzustandseigenschaften des Ein-Band Hubbard-Modells [Variational Methods for Investigating Ground-State Properties of the One-Band Hubbard Model]

Eichhorn, J.

Biologische Kybernetik, Technische Universität Dresden, Dresden, Germany, May 2001 (diplomathesis)

Abstract
Using different modifications of a new variational approach, static ground-state properties of the one-band Hubbard model, such as the energy and the staggered magnetisation, are calculated. By taking into account additional fluctuations, the method is gradually improved so that a very good description of the energy in one and two dimensions can be achieved. After a detailed discussion of the application in one dimension, extensions to two dimensions are introduced. Using a modified version of the variational ansatz, in particular, a description of the quantum phase transition for the magnetisation should be possible.
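
For orientation, the model under study is defined by the standard one-band Hubbard Hamiltonian (textbook notation; the thesis's conventions may differ):

```latex
% One-band Hubbard model: hopping t between neighbouring sites,
% on-site repulsion U; n_{i\sigma} = c^\dagger_{i\sigma} c_{i\sigma}.
H \;=\; -t \sum_{\langle i,j\rangle,\sigma}
        \bigl( c^{\dagger}_{i\sigma} c_{j\sigma} + \text{h.c.} \bigr)
      \;+\; U \sum_{i} n_{i\uparrow} n_{i\downarrow}
```

The ground-state energy and the staggered magnetisation mentioned in the abstract are expectation values of H and of the sublattice-alternating spin density in the variational state.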

PostScript [BibTex]


2000


Advances in Large Margin Classifiers

Smola, A., Bartlett, P., Schölkopf, B., Schuurmans, D.

pages: 422, Neural Information Processing, MIT Press, Cambridge, MA, USA, October 2000 (book)

Abstract
The concept of large margins is a unifying principle for the analysis of many different approaches to the classification of data from examples, including boosting, mathematical programming, neural networks, and support vector machines. The fact that it is the margin, or confidence level, of a classification--that is, a scale parameter--rather than a raw training error that matters has become a key tool for dealing with classifiers. This book shows how this idea applies to both the theoretical analysis and the design of algorithms. The book provides an overview of recent developments in large margin classifiers, examines connections with other methods (e.g., Bayesian inference), and identifies strengths and weaknesses of the method, as well as directions for future research. Among the contributors are Manfred Opper, Vladimir Vapnik, and Grace Wahba.
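
For readers new to the terminology: the margin referred to throughout is, in the usual notation (ours, not the book's), the quantity

```latex
% Functional margin of a labelled example (x, y), y in {-1, +1},
% under a real-valued classifier f:
\rho(x, y) \;=\; y\, f(x)
```

so that ρ > 0 means a correct classification and larger ρ means higher confidence; the analyses collected in the book bound generalization error in terms of the distribution of these margins rather than the raw training error.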

Web [BibTex]

An Introduction to Kernel-Based Learning Algorithms

Müller, K., Mika, S., Rätsch, G., Tsuda, K., Schölkopf, B.

In Handbook of Neural Network Signal Processing, 4, (Editors: Yu Hen Hu and Jenq-Neng Hwang), CRC Press, 2000 (inbook)

[BibTex]
