

2013


A Review of Performance Variations in SMR-Based Brain–Computer Interfaces (BCIs)

Grosse-Wentrup, M., Schölkopf, B.

In Brain-Computer Interface Research, pages: 39-51, 4, SpringerBriefs in Electrical and Computer Engineering, (Editors: Guger, C., Allison, B. Z. and Edlinger, G.), Springer, 2013 (inbook)

PDF DOI [BibTex]



Semi-supervised learning in causal and anticausal settings

Schölkopf, B., Janzing, D., Peters, J., Sgouritsa, E., Zhang, K., Mooij, J.

In Empirical Inference, pages: 129-141, 13, Festschrift in Honor of Vladimir Vapnik, (Editors: Schölkopf, B., Luo, Z. and Vovk, V.), Springer, 2013 (inbook)

DOI [BibTex]



Tractable large-scale optimization in machine learning

Sra, S.

In Tractability: Practical Approaches to Hard Problems, pages: 202-230, 7, (Editors: Bordeaux, L., Hamadi, Y., Kohli, P. and Mateescu, R.), Cambridge University Press, 2013 (inbook)

[BibTex]



On the Relations and Differences between Popper Dimension, Exclusion Dimension and VC-Dimension

Seldin, Y., Schölkopf, B.

In Empirical Inference - Festschrift in Honor of Vladimir N. Vapnik, pages: 53-57, 6, (Editors: Schölkopf, B., Luo, Z. and Vovk, V.), Springer, 2013 (inbook)

[BibTex]


2006


Prediction of Protein Function from Networks

Shin, H., Tsuda, K.

In Semi-Supervised Learning, pages: 361-376, Adaptive Computation and Machine Learning, (Editors: O Chapelle, B Schölkopf and A Zien), MIT Press, Cambridge, MA, USA, November 2006 (inbook)

Abstract
In computational biology, it is common to represent domain knowledge using graphs. Frequently there exist multiple graphs for the same set of nodes, representing information from different sources, and no single graph is sufficient to predict class labels of unlabelled nodes reliably. One way to enhance reliability is to integrate multiple graphs, since individual graphs are partly independent of and partly complementary to each other for prediction. In this chapter, we describe an algorithm that assigns weights to multiple graphs within graph-based semi-supervised learning. Both predicting class labels and searching for weights for combining multiple graphs are formulated as a single convex optimization problem. The graph-combining method is applied to functional class prediction of yeast proteins. When compared with individual graphs, the combined graph with optimized weights performs significantly better than any single graph. When compared with the semidefinite programming-based support vector machine (SDP/SVM), it shows comparable accuracy in a remarkably short time. Compared with a combined graph with equal-valued weights, our method can select important graphs without loss of accuracy, which implies the desirable property of integration with selectivity.
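
In outline: build one Laplacian as a weighted sum of the individual graphs' Laplacians, then propagate the known labels over it. The Python sketch below is illustrative only and assumes the combination weights are already given, whereas the chapter's algorithm obtains them jointly with the labels from one convex program.

    import numpy as np

    def combined_laplacian(adjacencies, weights):
        """Weighted sum of normalized Laplacians: sum_k c_k (I - D_k^{-1/2} W_k D_k^{-1/2})."""
        n = adjacencies[0].shape[0]
        L = np.zeros((n, n))
        for W, c in zip(adjacencies, weights):
            d = W.sum(axis=1)
            d[d == 0] = 1.0                        # guard against isolated nodes
            D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
            L += c * (np.eye(n) - D_inv_sqrt @ W @ D_inv_sqrt)
        return L

    def propagate(L, y, mu=1.0):
        """Minimize f'Lf + mu*||f - y||^2; labelled nodes carry +/-1 in y, unlabelled nodes 0."""
        return np.linalg.solve(L + mu * np.eye(L.shape[0]), mu * y)

Thresholding the propagated scores at zero gives the predicted class memberships; the chapter's contribution is choosing the weights by convex optimization rather than fixing them by hand.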

Web [BibTex]



Discrete Regularization

Zhou, D., Schölkopf, B.

In Semi-Supervised Learning, pages: 237-250, Adaptive Computation and Machine Learning, (Editors: O Chapelle and B Schölkopf and A Zien), MIT Press, Cambridge, MA, USA, November 2006 (inbook)

Abstract
Many real-world machine learning problems are situated on finite discrete sets, including dimensionality reduction, clustering, and transductive inference. A variety of approaches for learning from finite sets have been proposed, from different motivations and for different problems. In most of those approaches, a finite set is modeled as a graph in which the edges encode pairwise relationships among the objects in the set. Consequently, many concepts and methods from graph theory are adopted; in particular, the graph Laplacian is widely used. In this chapter we present a systematic framework for learning from a finite set represented as a graph. We develop discrete analogues of a number of differential operators, and then construct a discrete analogue of classical regularization theory based on those discrete differential operators. The graph Laplacian based approaches are special cases of this general discrete regularization framework. An important implication of this framework is that we have a wide choice of regularization terms on a graph, in addition to the widely used one based on the graph Laplacian.
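
To make the framework concrete, here is a minimal sketch (ours, not code from the chapter) of a discrete p-Dirichlet form, a smoothness penalty built from differences of a function across graph edges; for p = 2 it coincides with the standard normalized graph Laplacian regularizer, and other values of p give alternatives to it.

    import numpy as np

    def p_dirichlet_form(W, f, p=2):
        """Discrete p-Dirichlet form on a graph with symmetric adjacency matrix W.
        For p = 2 this equals f' (I - D^{-1/2} W D^{-1/2}) f, the normalized Laplacian penalty."""
        d = np.maximum(W.sum(axis=1), 1e-12)        # vertex degrees, guarded against zeros
        g = f / np.sqrt(d)                          # degree-normalized function values
        diff = np.abs(g[:, None] - g[None, :])      # |f_i/sqrt(d_i) - f_j/sqrt(d_j)| for all pairs
        return 0.5 * np.sum(W * diff ** p)          # each edge counted once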

PDF Web [BibTex]



Semi-Supervised Learning

Chapelle, O., Schölkopf, B., Zien, A.

pages: 508, Adaptive Computation and Machine Learning, MIT Press, Cambridge, MA, USA, September 2006 (book)

Abstract
In the field of machine learning, semi-supervised learning (SSL) occupies the middle ground between supervised learning (in which all training examples are labeled) and unsupervised learning (in which no labeled data are given). Interest in SSL has increased in recent years, particularly because of application domains in which unlabeled data are plentiful, such as images, text, and bioinformatics. This first comprehensive overview of SSL presents state-of-the-art algorithms, a taxonomy of the field, selected applications, benchmark experiments, and perspectives on ongoing and future research. Semi-Supervised Learning first presents the key assumptions and ideas underlying the field: smoothness, cluster or low-density separation, manifold structure, and transduction. The core of the book is the presentation of SSL methods, organized according to algorithmic strategies. After an examination of generative models, the book describes algorithms that implement the low-density separation assumption, graph-based methods, and algorithms that perform two-step learning. The book then discusses SSL applications and offers guidelines for SSL practitioners by analyzing the results of extensive benchmark experiments. Finally, the book looks at interesting directions for SSL research. The book closes with a discussion of the relationship between semi-supervised learning and transduction.

Web [BibTex]



Gaussian Processes for Machine Learning

Rasmussen, CE., Williams, CKI.

pages: 248, Adaptive Computation and Machine Learning, MIT Press, Cambridge, MA, USA, January 2006 (book)

Abstract
Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. GPs have received increased attention in the machine-learning community over the past decade, and this book provides a long-needed systematic and unified treatment of theoretical and practical aspects of GPs in machine learning. The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics. The book deals with the supervised-learning problem for both regression and classification, and includes detailed algorithms. A wide variety of covariance (kernel) functions are presented and their properties discussed. Model selection is discussed both from a Bayesian and a classical perspective. Many connections to other well-known techniques from machine learning and statistics are discussed, including support-vector machines, neural networks, splines, regularization networks, relevance vector machines and others. Theoretical issues including learning curves and the PAC-Bayesian framework are treated, and several approximation methods for learning with large datasets are discussed. The book contains illustrative examples and exercises, and code and datasets are available on the Web. Appendixes provide mathematical background and a discussion of Gaussian Markov processes.
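
For the regression case, the GP posterior amounts to a few lines of linear algebra. The sketch below follows the standard textbook equations with an illustrative squared-exponential kernel and fixed hyperparameters; the book itself develops model selection (for example, via the marginal likelihood) rather than fixing these values.

    import numpy as np

    def rbf(X1, X2, lengthscale=1.0, variance=1.0):
        """Squared-exponential covariance k(x, x') = v * exp(-||x - x'||^2 / (2 l^2))."""
        sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return variance * np.exp(-0.5 * sq / lengthscale ** 2)

    def gp_posterior(X_train, y_train, X_test, noise=1e-2):
        """Posterior mean and covariance of GP regression with Gaussian observation noise."""
        K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
        K_s = rbf(X_train, X_test)
        K_ss = rbf(X_test, X_test)
        alpha = np.linalg.solve(K, y_train)
        mean = K_s.T @ alpha
        cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
        return mean, cov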

Web [BibTex]



Combining a Filter Method with SVMs

Lal, T., Chapelle, O., Schölkopf, B.

In Feature Extraction: Foundations and Applications, Studies in Fuzziness and Soft Computing, Vol. 207, pages: 439-446, (Editors: I Guyon and M Nikravesh and S Gunn and LA Zadeh), Springer, Berlin, Germany, 2006 (inbook)

Abstract
Our goal for the competition (the NIPS 2003 feature selection competition) was to evaluate the usefulness of simple machine learning techniques. We decided to use a correlation criterion as the feature selection method and Support Vector Machines for the classification part. Here we explain how we chose the regularization parameter C of the SVM, how we determined the kernel parameter, and how we estimated the number of features used for each data set. All analyses were carried out on the training sets of the competition data. We chose the data set Arcene as an example to explain the approach step by step. In our view, the point of this competition was the construction of a well-performing classifier rather than the systematic analysis of a specific approach. This is why our search for the best classifier was guided only by the described methods and why we deviated from the road map on several occasions. All calculations were done with the software Spider [2004].
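
The pipeline the abstract describes is: score each feature by its correlation with the labels, keep the top-ranked features, and train an SVM on the reduced data. A hedged sketch of that pipeline follows; the feature count and C below are placeholders, whereas the chapter estimates them per data set on the training data.

    import numpy as np
    from sklearn.svm import SVC

    def correlation_scores(X, y):
        """Absolute Pearson correlation between each feature column of X and the labels y."""
        Xc = X - X.mean(axis=0)
        yc = y - y.mean()
        num = Xc.T @ yc
        den = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum()) + 1e-12
        return np.abs(num / den)

    def filter_then_svm(X, y, n_features=100, C=1.0):
        """Keep the n_features best-correlated features, then fit an SVM on them."""
        top = np.argsort(correlation_scores(X, y))[::-1][:n_features]
        clf = SVC(C=C, kernel="rbf").fit(X[:, top], y)
        return clf, top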

PDF DOI [BibTex]



Embedded methods

Lal, T., Chapelle, O., Weston, J., Elisseeff, A.

In Feature Extraction: Foundations and Applications, Studies in Fuzziness and Soft Computing, Vol. 207, pages: 137-165, (Editors: I Guyon, S Gunn, M Nikravesh and LA Zadeh), Springer, Berlin, Germany, 2006 (inbook)

Abstract
Embedded methods are a relatively new approach to feature selection. Unlike filter methods, which do not incorporate learning, and wrapper approaches, which can be used with arbitrary classifiers, in embedded methods the feature selection part cannot be separated from the learning part. Existing embedded methods are reviewed based on a unifying mathematical framework.
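
A classic member of the embedded family (our illustration, not an example worked in the chapter) is L1-regularized linear regression: the soft-thresholding step of the optimizer drives irrelevant weights exactly to zero, so selection happens as part of fitting rather than before or around it.

    import numpy as np

    def lasso_ista(X, y, lam=0.1, iters=500):
        """ISTA for 0.5 * ||Xw - y||^2 + lam * ||w||_1; nonzero weights are the selected features."""
        lr = 1.0 / np.linalg.norm(X, 2) ** 2        # step size = 1 / Lipschitz constant of the gradient
        w = np.zeros(X.shape[1])
        for _ in range(iters):
            w = w - lr * (X.T @ (X @ w - y))        # gradient step on the squared loss
            w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-thresholding (L1 proximal step)
        return w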

PDF Web [BibTex]


2003


Support Vector Machines

Schölkopf, B., Smola, A.

In Handbook of Brain Theory and Neural Networks (2nd edition), pages: 1119-1125, (Editors: MA Arbib), MIT Press, Cambridge, MA, USA, 2003 (inbook)

[BibTex]



Extension of the nu-SVM range for classification

Perez-Cruz, F., Weston, J., Herrmann, D., Schölkopf, B.

In Advances in Learning Theory: Methods, Models and Applications, NATO Science Series III: Computer and Systems Sciences, Vol. 190, pages: 179-196, (Editors: J Suykens and G Horvath and S Basu and C Micchelli and J Vandewalle), IOS Press, Amsterdam, 2003 (inbook)

[BibTex]



An Introduction to Support Vector Machines

Schölkopf, B.

In Recent Advances and Trends in Nonparametric Statistics, pages: 3-17, (Editors: MG Akritas and DN Politis), Elsevier, Amsterdam, The Netherlands, 2003 (inbook)

Web DOI [BibTex]



Statistical Learning and Kernel Methods in Bioinformatics

Schölkopf, B., Guyon, I., Weston, J.

In Artificial Intelligence and Heuristic Methods in Bioinformatics, Vol. 183, pages: 1-21, 3, (Editors: P Frasconi and R Shamir), IOS Press, Amsterdam, The Netherlands, 2003 (inbook)

[BibTex]



Statistical Learning and Kernel Methods

Navia-Vázquez, A., Schölkopf, B.

In Adaptivity and Learning: An Interdisciplinary Debate, pages: 161-186, (Editors: R Kühn and R Menzel and W Menzel and U Ratsch and MM Richter and I-O Stamatescu), Springer, Berlin, Heidelberg, Germany, 2003 (inbook)

[BibTex]



A Short Introduction to Learning with Kernels

Schölkopf, B., Smola, A.

In Proceedings of the Machine Learning Summer School, Lecture Notes in Artificial Intelligence, Vol. 2600, pages: 41-64, (Editors: S Mendelson and AJ Smola), Springer, Berlin, Heidelberg, Germany, 2003 (inbook)

[BibTex]



Bayesian Kernel Methods

Smola, A., Schölkopf, B.

In Advanced Lectures on Machine Learning, Machine Learning Summer School 2002, Lecture Notes in Computer Science, Vol. 2600, pages: 65-117, (Editors: S Mendelson and AJ Smola), Springer, Berlin, Germany, 2003 (inbook)

DOI [BibTex]



Stability of ensembles of kernel machines

Elisseeff, A., Pontil, M.

In Advances in Learning Theory: Methods, Models and Applications, NATO Science Series III: Computer and Systems Sciences, Vol. 190, pages: 111-124, (Editors: J Suykens and G Horvath and S Basu and C Micchelli and J Vandewalle), IOS Press, Amsterdam, 2003 (inbook)

[BibTex]


2002


Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond

Schölkopf, B., Smola, A.

pages: 644, Adaptive Computation and Machine Learning, MIT Press, Cambridge, MA, USA, December 2002 (book). Parts of this book, including an introduction to kernel methods, are available for download.

Abstract
In the 1990s, a new type of learning algorithm was developed, based on results from statistical learning theory: the Support Vector Machine (SVM). This gave rise to a new class of theoretically elegant learning machines that use a central concept of SVMs, kernels, for a number of learning tasks. Kernel machines provide a modular framework that can be adapted to different tasks and domains by the choice of the kernel function and the base algorithm. They are replacing neural networks in a variety of fields, including engineering, information retrieval, and bioinformatics. Learning with Kernels provides an introduction to SVMs and related kernel methods. Although the book begins with the basics, it also includes the latest research. It provides all of the concepts necessary to enable a reader equipped with some basic mathematical knowledge to enter the world of machine learning using theoretically well-founded yet easy-to-use kernel algorithms and to understand and apply the powerful algorithms that have been developed over the last few years.
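
The modularity the blurb refers to (swap the kernel, keep the base algorithm) can be made concrete with kernel ridge regression, one of the simplest kernel machines; this sketch is illustrative and not taken from the book.

    import numpy as np

    def linear_kernel(X1, X2):
        return X1 @ X2.T

    def poly_kernel(X1, X2, degree=3, c=1.0):
        return (X1 @ X2.T + c) ** degree

    def krr_fit(kernel, X, y, lam=1e-2):
        """Dual ridge solution: alpha = (K + lam*I)^{-1} y."""
        K = kernel(X, X)
        return np.linalg.solve(K + lam * np.eye(len(y)), y)

    def krr_predict(kernel, X_train, alpha, X_test):
        """Predictions are kernel evaluations against the training set, weighted by alpha."""
        return kernel(X_test, X_train) @ alpha

Swapping linear_kernel for poly_kernel changes the hypothesis space without touching the fitting code, which is exactly the modular design the abstract describes.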

Web [BibTex]


2000


Robust ensemble learning

Rätsch, G., Schölkopf, B., Smola, A., Mika, S., Onoda, T., Müller, K.

In Advances in Large Margin Classifiers, pages: 207-220, Neural Information Processing Series, (Editors: AJ Smola and PL Bartlett and B Schölkopf and D Schuurmans), MIT Press, Cambridge, MA, USA, October 2000 (inbook)

[BibTex]



Entropy numbers for convex combinations and MLPs

Smola, A., Elisseeff, A., Schölkopf, B., Williamson, R.

In Advances in Large Margin Classifiers, pages: 369-387, Neural Information Processing Series, (Editors: AJ Smola and PL Bartlett and B Schölkopf and D Schuurmans), MIT Press, Cambridge, MA, USA, October 2000 (inbook)

[BibTex]



Natural Regularization from Generative Models

Oliver, N., Schölkopf, B., Smola, A.

In Advances in Large Margin Classifiers, pages: 51-60, Neural Information Processing Series, (Editors: AJ Smola and PL Bartlett and B Schölkopf and D Schuurmans), MIT Press, Cambridge, MA, USA, October 2000 (inbook)

[BibTex]



Advances in Large Margin Classifiers

Smola, A., Bartlett, P., Schölkopf, B., Schuurmans, D.

pages: 422, Neural Information Processing, MIT Press, Cambridge, MA, USA, October 2000 (book)

Abstract
The concept of large margins is a unifying principle for the analysis of many different approaches to the classification of data from examples, including boosting, mathematical programming, neural networks, and support vector machines. The fact that it is the margin, or confidence level, of a classification (that is, a scale parameter) rather than a raw training error that matters has become a key tool for dealing with classifiers. This book shows how this idea applies to both the theoretical analysis and the design of algorithms. The book provides an overview of recent developments in large margin classifiers, examines connections with other methods (e.g., Bayesian inference), and identifies strengths and weaknesses of the method, as well as directions for future research. Among the contributors are Manfred Opper, Vladimir Vapnik, and Grace Wahba.

Web [BibTex]



Solving Satisfiability Problems with Genetic Algorithms

Harmeling, S.

In Genetic Algorithms and Genetic Programming at Stanford 2000, pages: 206-213, (Editors: Koza, J. R.), Stanford Bookstore, Stanford, CA, USA, June 2000 (inbook)

Abstract
We show how to solve hard 3-SAT problems using genetic algorithms. Furthermore, we explore other genetic operators that may be useful to tackle 3-SAT problems, and discuss their pros and cons.
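
A toy version of the idea (a hedged sketch with generic operators: bitstring individuals, truncation selection, one-point crossover, and bit-flip mutation, not the specific operators the chapter explores) might look as follows; clauses are given DIMACS-style as lists of signed variable indices, e.g. [1, -2, 3].

    import random

    def ga_3sat(clauses, n_vars, pop_size=100, generations=500, p_mut=0.02):
        """Toy genetic algorithm for 3-SAT: an individual is one boolean per variable,
        and fitness counts the number of satisfied clauses."""
        def fitness(assign):
            return sum(any(assign[abs(lit) - 1] == (lit > 0) for lit in clause)
                       for clause in clauses)

        population = [[random.random() < 0.5 for _ in range(n_vars)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            if fitness(population[0]) == len(clauses):
                return population[0]                     # satisfying assignment found
            parents = population[: pop_size // 2]        # truncation selection
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, n_vars)        # one-point crossover
                child = a[:cut] + b[cut:]
                child = [not g if random.random() < p_mut else g
                         for g in child]                 # bit-flip mutation
                children.append(child)
            population = parents + children
        return max(population, key=fitness)              # best effort if no solution was found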

PDF [BibTex]



Statistical Learning and Kernel Methods

Schölkopf, B.

In Data Fusion and Perception, CISM Courses and Lectures, International Centre for Mechanical Sciences, Vol. 431, pages: 3-24, 23, (Editors: G Della Riccia and H-J Lenz and R Kruse), Springer, Vienna, 2000 (inbook)

[BibTex]



An Introduction to Kernel-Based Learning Algorithms

Müller, K., Mika, S., Rätsch, G., Tsuda, K., Schölkopf, B.

In Handbook of Neural Network Signal Processing, 4, (Editors: Yu Hen Hu and Jenq-Neng Hwang), CRC Press, 2000 (inbook)

[BibTex]
