2014


Single-Source Domain Adaptation with Target and Conditional Shift

Zhang, K., Schölkopf, B., Muandet, K., Wang, Z., Zhou, Z., Persello, C.

In Regularization, Optimization, Kernels, and Support Vector Machines, Chapter 19, pages: 427-456, Chapman & Hall/CRC Machine Learning & Pattern Recognition, (Editors: Suykens, J. A. K., Signoretto, M. and Argyriou, A.), Chapman and Hall/CRC, Boca Raton, USA, 2014 (inbook)

[BibTex]

Higher-Order Tensors in Diffusion Imaging

Schultz, T., Fuster, A., Ghosh, A., Deriche, R., Florack, L., Lim, L.

In Visualization and Processing of Tensors and Higher Order Descriptors for Multi-Valued Data, pages: 129-161, Mathematics + Visualization, (Editors: Westin, C.-F., Vilanova, A. and Burgeth, B.), Springer, 2014 (inbook)

[BibTex]

Fuzzy Fibers: Uncertainty in dMRI Tractography

Schultz, T., Vilanova, A., Brecheisen, R., Kindlmann, G.

In Scientific Visualization: Uncertainty, Multifield, Biomedical, and Scalable Visualization, Chapter 8, pages: 79-92, Mathematics + Visualization, (Editors: Hansen, C. D., Chen, M., Johnson, C. R., Kaufman, A. E. and Hagen, H.), Springer, 2014 (inbook)

[BibTex]

Nonconvex Proximal Splitting with Computational Errors

Sra, S.

In Regularization, Optimization, Kernels, and Support Vector Machines, Chapter 4, pages: 83-102, (Editors: Suykens, J. A. K., Signoretto, M. and Argyriou, A.), CRC Press, 2014 (inbook)

[BibTex]

Active Learning - Modern Learning Theory

Balcan, M., Urner, R.

In Encyclopedia of Algorithms, (Editors: Kao, M.-Y.), Springer Berlin Heidelberg, 2014 (incollection)

link (url) DOI [BibTex]

2012

Expectation-Maximization methods for solving (PO)MDPs and optimal control problems

Toussaint, M., Storkey, A., Harmeling, S.

In Inference and Learning in Dynamic Models, (Editors: Barber, D., Cemgil, A.T. and Chiappa, S.), Cambridge University Press, Cambridge, UK, January 2012 (inbook) In press

PDF [BibTex]

Active Learning Methods in Classification of Remote Sensing Images

Bruzzone, L., Persello, C., Demir, B.

In Signal and Image Processing for Remote Sensing, (Editors: CH Chen), CRC Press, Boca Raton, FL, USA, January 2012 (inbook) In press

[BibTex]

Inferential structure determination from NMR data

Habeck, M.

In Bayesian methods in structural bioinformatics, pages: 287-312, (Editors: Hamelryck, T., Mardia, K. V. and Ferkinghoff-Borg, J.), Springer, New York, 2012 (inbook)

[BibTex]

Robot Learning

Sigaud, O., Peters, J.

In Encyclopedia of the sciences of learning, (Editors: Seel, N.M.), Springer, Berlin, Germany, 2012 (inbook)

Web [BibTex]

Reinforcement Learning in Robotics: A Survey

Kober, J., Peters, J.

In Reinforcement Learning, Chapter 12, pages: 579-610, (Editors: Wiering, M. and van Otterlo, M.), Springer, Berlin, Germany, 2012 (inbook)

Abstract
As most action generation problems of autonomous robots can be phrased in terms of sequential decision problems, robotics offers a tremendously important and interesting application platform for reinforcement learning. Similarly, the challenges of this domain provide a major real-world check for reinforcement learning. Hence, the interplay between both disciplines can be seen as being as promising as the one between physics and mathematics. Nevertheless, only a fraction of the scientists working on reinforcement learning are sufficiently tied to robotics to oversee most problems encountered in this context. Thus, we will bring the most important challenges faced by robot reinforcement learning to their attention. To achieve this goal, we will attempt to survey most work that has successfully applied reinforcement learning to behavior generation for real robots. We discuss how the presented successful approaches have been made tractable despite the complexity of the domain and will study how representations or the inclusion of prior knowledge can make a significant difference. Accordingly, a particular focus of our chapter lies on the choice between model-based and model-free as well as between value function-based and policy search methods. As a result, we obtain a fairly complete survey of robot reinforcement learning, which should allow a general reinforcement learning researcher to understand this domain.

Web DOI [BibTex]

Higher-Order Tensors in Diffusion MRI

Schultz, T., Fuster, A., Ghosh, A., Deriche, R., Florack, L., Lim, L.

In Visualization and Processing of Tensors and Higher Order Descriptors for Multi-Valued Data, (Editors: Westin, C. F., Vilanova, A. and Burgeth, B.), Springer, 2012 (inbook) Accepted

[BibTex]

2011

Projected Newton-type methods in machine learning

Schmidt, M., Kim, D., Sra, S.

In Optimization for Machine Learning, pages: 305-330, (Editors: Sra, S., Nowozin, S. and Wright, S. J.), MIT Press, Cambridge, MA, USA, December 2011 (inbook)

Abstract
We consider projected Newton-type methods for solving large-scale optimization problems arising in machine learning and related fields. We first introduce an algorithmic framework for projected Newton-type methods by reviewing a canonical projected (quasi-)Newton method. This method, while conceptually pleasing, has a high computation cost per iteration. Thus, we discuss two variants that are more scalable, namely, two-metric projection and inexact projection methods. Finally, we show how to apply the Newton-type framework to handle non-smooth objectives. Examples are provided throughout the chapter to illustrate machine learning applications of our framework.
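
The framework is easy to make concrete. Below is a minimal sketch of a projected Newton-type iteration for a box-constrained least-squares problem, with a diagonal Hessian scaling so the projection stays a simple clip; this is an illustration under our own assumptions, not the chapter's implementation, and all names are ours.

    import numpy as np

    def project_box(x, lo, hi):
        # Euclidean projection onto the box [lo, hi]
        return np.clip(x, lo, hi)

    def projected_newton_diag(grad, hess_diag, x0, lo, hi, steps=200, lr=0.5):
        # Projected Newton-type step with a diagonal metric:
        #   x_{k+1} = P( x_k - lr * g(x_k) / diag(H(x_k)) )
        # A diagonal scaling keeps the projection valid and cheap (the
        # "two-metric projection" flavor); a line search would replace
        # the fixed step size lr in a serious implementation.
        x = x0.copy()
        for _ in range(steps):
            d = grad(x) / hess_diag(x)            # Newton-like direction
            x = project_box(x - lr * d, lo, hi)   # pull back onto the feasible set
        return x

    # Example: minimize 0.5 * ||A x - b||^2  subject to  0 <= x <= 1
    A = np.random.randn(20, 5); b = np.random.randn(20)
    grad = lambda x: A.T @ (A @ x - b)
    hdiag = lambda x: np.sum(A * A, axis=0)       # diag(A^T A)
    x_star = projected_newton_diag(grad, hdiag, np.zeros(5), 0.0, 1.0)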

PDF Web [BibTex]

Statistical Learning Theory: Models, Concepts, and Results

von Luxburg, U., Schölkopf, B.

In Handbook of the History of Logic, Vol. 10: Inductive Logic, pages: 651-706, (Editors: Gabbay, D. M., Hartmann, S. and Woods, J. H.), Elsevier North Holland, Amsterdam, Netherlands, May 2011 (inbook)

Abstract
Statistical learning theory provides the theoretical basis for many of today's machine learning algorithms and is arguably one of the most beautifully developed branches of artificial intelligence in general. It originated in Russia in the 1960s and gained wide popularity in the 1990s following the development of the so-called Support Vector Machine (SVM), which has become a standard tool for pattern recognition in a variety of domains ranging from computer vision to computational biology. Providing the basis of new learning algorithms, however, was not the only motivation for developing statistical learning theory. It was just as much a philosophical one, attempting to answer the question of what it is that allows us to draw valid conclusions from empirical data. In this article we attempt to give a gentle, non-technical overview of the key ideas and insights of statistical learning theory. We do not assume that the reader has a deep background in mathematics, statistics, or computer science. Given the nature of the subject matter, however, some familiarity with mathematical concepts and notations and some intuitive understanding of basic probability is required. There exist many excellent references to more technical surveys of the mathematics of statistical learning theory: the monographs by one of the founders of statistical learning theory ([Vapnik, 1995], [Vapnik, 1998]), a brief overview of statistical learning theory in Section 5 of [Schölkopf and Smola, 2002], more technical overview papers such as [Bousquet et al., 2003], [Mendelson, 2003], [Boucheron et al., 2005], [Herbrich and Williamson, 2002], and the monograph [Devroye et al., 1996].
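
For readers who want the key objects at a glance, the central quantities of such overviews can be written in textbook notation (a generic statement, not specific to this chapter): the risk and empirical risk

\[
R(f) \;=\; \mathbb{E}_{(X,Y)\sim P}\bigl[\ell(f(X), Y)\bigr], \qquad
R_{\mathrm{emp}}(f) \;=\; \frac{1}{n} \sum_{i=1}^{n} \ell(f(X_i), Y_i),
\]

and a typical generalization bound guarantees that, with probability at least \(1-\delta\), \(\sup_{f \in \mathcal{F}} \bigl(R(f) - R_{\mathrm{emp}}(f)\bigr)\) is at most a capacity term for the function class \(\mathcal{F}\) (VC dimension, Rademacher complexity) plus a deviation term of order \(\sqrt{\log(1/\delta)/n}\).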

PDF Web DOI [BibTex]

Robot Learning

Peters, J., Tedrake, R., Roy, N., Morimoto, J.

In Encyclopedia of Machine Learning, pages: 865-869, Encyclopedia of machine learning, (Editors: Sammut, C. and Webb, G. I.), Springer, New York, NY, USA, January 2011 (inbook)

PDF Web DOI [BibTex]

What You Expect Is What You Get? Potential Use of Contingent Negative Variation for Passive BCI Systems in Gaze-Based HCI

Ihme, K., Zander, TO.

In Affective Computing and Intelligent Interaction, Lecture Notes in Computer Science, Vol. 6975, pages: 447-456, (Editors: D’Mello, S., Graesser, A., Schuller, B. and Martin, J.-C.), Springer, Berlin, Germany, 2011 (inbook)

Abstract
When using eye movements for cursor control in human-computer interaction (HCI), it may be difficult to find an appropriate substitute for the click operation. Most approaches make use of dwell times. However, in this context the so-called Midas-Touch-Problem occurs, which means that the system wrongly interprets fixations due to long processing times or spontaneous dwellings of the user as a command. Lately it has been shown that brain-computer interface (BCI) input bears good prospects to overcome this problem using imagined hand movements to elicit a selection. The current approach tries to develop this idea further by exploring potential signals for the use in a passive BCI, which would have the advantage that the brain signals used as input are generated automatically without conscious effort of the user. To explore event-related potentials (ERPs) giving information about the user’s intention to select an object, 32-channel electroencephalography (EEG) was recorded from ten participants interacting with a dwell-time-based system. Comparing ERP signals during the dwell time with those occurring during fixations on a neutral cross hair, a sustained negative slow cortical potential at central electrode sites was revealed. This negativity might be a contingent negative variation (CNV) reflecting the participants’ anticipation of the upcoming selection. Offline classification suggests that the CNV is detectable in single trial (mean accuracy 74.9 %). In the future, research on the CNV should be conducted to ensure its stable occurrence in human-computer interaction and to render possible its use as a potential substitute for the click operation.

DOI [BibTex]

Kernel Methods in Bioinformatics

Borgwardt, KM.

In Handbook of Statistical Bioinformatics, pages: 317-334, Springer Handbooks of Computational Statistics ; 3, (Editors: Lu, H.H.-S., Schölkopf, B. and Zhao, H.), Springer, Berlin, Germany, 2011 (inbook)

Abstract
Kernel methods have now witnessed more than a decade of increasing popularity in the bioinformatics community. In this article, we will compactly review this development, examining the areas in which kernel methods have contributed to computational biology and describing the reasons for their success.

PDF DOI [BibTex]

Cue Combination: Beyond Optimality

Rosas, P., Wichmann, F.

In Sensory Cue Integration, pages: 144-152, (Editors: Trommershäuser, J., Körding, K. and Landy, M. S.), Oxford University Press, 2011 (inbook)

[BibTex]

2007

Support Vector Machine Learning for Interdependent and Structured Output Spaces

Altun, Y., Hofmann, T., Tsochantaridis, I.

In Predicting Structured Data, pages: 85-104, Advances in neural information processing systems, (Editors: Bakir, G. H. , T. Hofmann, B. Schölkopf, A. J. Smola, B. Taskar, S. V. N. Vishwanathan), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Web [BibTex]

Brisk Kernel ICA

Jegelka, S., Gretton, A.

In Large Scale Kernel Machines, pages: 225-250, Neural Information Processing, (Editors: Bottou, L. , O. Chapelle, D. DeCoste, J. Weston), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Abstract
Recent approaches to independent component analysis have used kernel independence measures to obtain very good performance in ICA, particularly in areas where classical methods experience difficulty (for instance, sources with near-zero kurtosis). In this chapter, we compare two efficient extensions of these methods for large-scale problems: random subsampling of entries in the Gram matrices used in defining the independence measures, and incomplete Cholesky decomposition of these matrices. We derive closed-form, efficiently computable approximations for the gradients of these measures, and compare their performance on ICA using both artificial and music data. We show that kernel ICA can scale up to much larger problems than yet attempted, and that incomplete Cholesky decomposition performs better than random sampling.
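
Of the two strategies compared, incomplete Cholesky decomposition is compact enough to sketch. The following is a generic pivoted incomplete Cholesky of a Gram matrix K, returning a low-rank factor L with K ≈ L Lᵀ; it is an illustrative routine of the kind the chapter builds on, not the authors' code.

    import numpy as np

    def incomplete_cholesky(K, tol=1e-6, max_rank=None):
        # Pivoted incomplete Cholesky: K (n x n, PSD) -> L (n x k), K ~= L @ L.T
        n = K.shape[0]
        r = n if max_rank is None else max_rank
        d = np.diag(K).astype(float).copy()   # residual diagonal entries
        L = np.zeros((n, r))
        for k in range(r):
            i = int(np.argmax(d))             # greedy pivot: largest residual
            if d[i] <= tol:                   # residual is negligible: stop early
                return L[:, :k]
            L[:, k] = (K[:, i] - L[:, :k] @ L[i, :k]) / np.sqrt(d[i])
            d -= L[:, k] ** 2                 # update residual diagonal
        return L

In kernel ICA, each source's Gram matrix would be factored this way, and subsequent computations use the thin n × k factors instead of the full n × n matrices.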

PDF Web [BibTex]

Training a Support Vector Machine in the Primal

Chapelle, O.

In Large Scale Kernel Machines, pages: 29-50, Neural Information Processing, (Editors: Bottou, L. , O. Chapelle, D. DeCoste, J. Weston), MIT Press, Cambridge, MA, USA, September 2007, This is a slightly updated version of the Neural Computation paper (inbook)

Abstract
Most literature on Support Vector Machines (SVMs) concentrate on the dual optimization problem. In this paper, we would like to point out that the primal problem can also be solved efficiently, both for linear and non-linear SVMs, and that there is no reason to ignore this possibility. On the contrary, from the primal point of view new families of algorithms for large scale SVM training can be investigated.
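
As a pointer to what "solving in the primal" means here, the unconstrained primal problem (a sketch in the standard notation of this line of work) reads

\[
\min_{\mathbf{w},\,b}\;\; \lambda\,\|\mathbf{w}\|^{2} \;+\; \sum_{i=1}^{n} \max\bigl(0,\; 1 - y_i(\mathbf{w}^{\top}\mathbf{x}_i + b)\bigr)^{p} .
\]

For p = 2 the loss is differentiable, so gradient and Newton steps apply directly; for nonlinear SVMs one substitutes \(\mathbf{w} = \sum_i \beta_i\,\phi(\mathbf{x}_i)\) (representer theorem) and optimizes over the coefficients \(\beta\).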

PDF Web [BibTex]

Approximation Methods for Gaussian Process Regression

Quiñonero-Candela, J., Rasmussen, CE., Williams, CKI.

In Large-Scale Kernel Machines, pages: 203-223, Neural Information Processing, (Editors: Bottou, L. , O. Chapelle, D. DeCoste, J. Weston), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Abstract
A wealth of computationally efficient approximation methods for Gaussian process regression have been recently proposed. We give a unifying overview of sparse approximations, following Quiñonero-Candela and Rasmussen (2005), and a brief review of approximate matrix-vector multiplication methods.
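
The common thread of the sparse approximations surveyed can be compressed into one formula: with m inducing variables, the n × n kernel matrix \(K_{ff}\) is replaced by a Nyström-type low-rank surrogate

\[
K_{ff} \;\approx\; Q_{ff} \;=\; K_{fu}\, K_{uu}^{-1}\, K_{uf},
\]

which cuts the training cost from \(O(n^{3})\) to \(O(n m^{2})\); in the unifying view of Quiñonero-Candela and Rasmussen (2005), the variants differ mainly in where \(Q_{ff}\) versus the exact \(K_{ff}\) enters the effective prior. (A compressed reading, not a substitute for the chapter.)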

PDF Web [BibTex]

Trading Convexity for Scalability

Collobert, R., Sinz, F., Weston, J., Bottou, L.

In Large Scale Kernel Machines, pages: 275-300, Neural Information Processing, (Editors: Bottou, L. , O. Chapelle, D. DeCoste, J. Weston), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Abstract
Convex learning algorithms, such as Support Vector Machines (SVMs), are often seen as highly desirable because they offer strong practical properties and are amenable to theoretical analysis. However, in this work we show how nonconvexity can provide scalability advantages over convexity. We show how concave-convex programming can be applied to produce (i) faster SVMs where training errors are no longer support vectors, and (ii) much faster Transductive SVMs.
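
The concave-convex programming step behind these speed-ups is short enough to state schematically (the general CCCP recipe, not the chapter's exact notation): split the nonconvex objective into convex and concave parts and iterate

\[
J(\theta) = J_{\mathrm{vex}}(\theta) + J_{\mathrm{cav}}(\theta), \qquad
\theta^{t+1} = \arg\min_{\theta}\;\; J_{\mathrm{vex}}(\theta) \;+\; \nabla J_{\mathrm{cav}}(\theta^{t})^{\top}\theta .
\]

Each iteration linearizes the concave part and solves an ordinary convex SVM problem, and the objective values are non-increasing along the iterates.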

PDF Web [BibTex]

Density Estimation of Structured Outputs in Reproducing Kernel Hilbert Spaces

Altun, Y., Smola, A.

In Predicting Structured Data, pages: 283-300, Advances in neural information processing systems, (Editors: BakIr, G. H., T. Hofmann, B. Schölkopf, A. J. Smola, B. Taskar, S. V.N. Vishwanathan), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Abstract
In this paper we study the problem of estimating conditional probability distributions for structured output prediction tasks in Reproducing Kernel Hilbert Spaces. More specifically, we prove decomposition results for undirected graphical models, give constructions for kernels, and show connections to Gaussian Process classification. Finally we present efficient means of solving the optimization problem and apply this to label sequence learning. Experiments on named entity recognition and pitch accent prediction tasks demonstrate the competitiveness of our approach.
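
Schematically (our compressed rendering, not the chapter's exact statement), the estimated conditional densities take a conditional exponential family form over a joint feature map \(\phi(x,y)\):

\[
p(y \mid x; w) \;=\; \exp\bigl(\langle w, \phi(x,y)\rangle \;-\; g(w \mid x)\bigr), \qquad
g(w \mid x) \;=\; \log \sum_{y' \in \mathcal{Y}} \exp\bigl(\langle w, \phi(x,y')\rangle\bigr),
\]

with w in the RKHS induced by a kernel on (x, y) pairs; the decomposition results concern how \(\phi\) factorizes over the cliques of an undirected graphical model, which keeps the log-partition term \(g\) tractable.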

Web [BibTex]

Classifying Event-Related Desynchronization in EEG, ECoG and MEG signals

Hill, N., Lal, T., Tangermann, M., Hinterberger, T., Widman, G., Elger, C., Schölkopf, B., Birbaumer, N.

In Toward Brain-Computer Interfacing, pages: 235-260, Neural Information Processing, (Editors: G Dornhege and J del R Millán and T Hinterberger and DJ McFarland and K-R Müller), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

PDF Web [BibTex]

Joint Kernel Maps

Weston, J., Bakir, G., Bousquet, O., Mann, T., Noble, W., Schölkopf, B.

In Predicting Structured Data, pages: 67-84, Advances in neural information processing systems, (Editors: GH Bakir and T Hofmann and B Schölkopf and AJ Smola and B Taskar and SVN Vishwanathan), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Web [BibTex]

Brain-Computer Interfaces for Communication in Paralysis: A Clinical Experimental Approach

Hinterberger, T., Nijboer, F., Kübler, A., Matuz, T., Furdea, A., Mochty, U., Jordan, M., Lal, T., Hill, J., Mellinger, J., Bensch, M., Tangermann, M., Widman, G., Elger, C., Rosenstiel, W., Schölkopf, B., Birbaumer, N.

In Toward Brain-Computer Interfacing, pages: 43-64, Neural Information Processing, (Editors: G. Dornhege and J del R Millán and T Hinterberger and DJ McFarland and K-R Müller), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

PDF Web [BibTex]

Probabilistic Structure Calculation

Rieping, W., Habeck, M., Nilges, M.

In Structure and Biophysics: New Technologies for Current Challenges in Biology and Beyond, pages: 81-98, NATO Security through Science Series, (Editors: Puglisi, J. D.), Springer, Berlin, Germany, March 2007 (inbook)

Web DOI [BibTex]

On the Pre-Image Problem in Kernel Methods

BakIr, G., Schölkopf, B., Weston, J.

In Kernel Methods in Bioengineering, Signal and Image Processing, pages: 284-302, (Editors: G Camps-Valls and JL Rojo-Álvarez and M Martínez-Ramón), Idea Group Publishing, Hershey, PA, USA, January 2007 (inbook)

Abstract
In this chapter we are concerned with the problem of reconstructing patterns from their representation in feature space, known as the pre-image problem. We review existing algorithms and propose a learning based approach. All algorithms are discussed regarding their usability and complexity and evaluated on an image denoising application.
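
For orientation, the pre-image problem has a one-line statement: given a feature-space expansion \(\Psi = \sum_i \alpha_i\,\Phi(x_i)\), find

\[
z^{*} \;=\; \arg\min_{z}\; \Bigl\|\Phi(z) - \sum_i \alpha_i\,\Phi(x_i)\Bigr\|_{\mathcal{H}}^{2}
\;=\; \arg\min_{z}\; k(z,z) \;-\; 2\sum_i \alpha_i\, k(z, x_i) \;+\; \text{const},
\]

which involves only kernel evaluations; the catch is that the objective is generally nonconvex in z and an exact pre-image need not exist, which is what motivates approximate and learning-based solutions.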

DOI [BibTex]

Some comments on ν-SVM

Dinuzzo, F., De Nicolao, G.

In A tribute to Antonio Lepschy, pages: -, (Editors: Picci, G. , M. E. Valcher), Edizione Libreria Progetto, Padova, Italy, 2007 (inbook)

[BibTex]

2004

Analysis of differential gene expression in healthy and osteoarthritic cartilage and isolated chondrocytes by microarray analysis

Aigner, T., Saas, J., Zien, A., Zimmer, R., Gebhard, P., Knorr, T.

In Cartilage and Osteoarthritis, Volume 1: Cellular and Molecular Tools, pages: 109-128, (Editors: Sabatini, M., P. Pastoureau and F. De Ceuninck), Humana Press, July 2004 (inbook)

Abstract
The regulation of chondrocytes in osteoarthritic cartilage and the expression of specific gene products by these cells during early-onset and late-stage osteoarthritis are not well characterized. With the introduction of cDNA array technology, the measurement of thousands of different genes in one small tissue sample can be carried out. Interpretation of gene expression analyses in articular cartilage is aided by the fact that this tissue contains only one cell type in both normal and diseased conditions. However, care has to be taken not to over- and misinterpret results, and some major challenges must be overcome in order to utilize the potential of this technology properly in the field of osteoarthritis.

Web [BibTex]

Distributed Command Execution

Stark, S., Berlin, M.

In BSD Hacks: 100 industrial-strength tips & tools, pages: 152, (Editors: Lavigne, Dru), O’Reilly, Beijing, May 2004 (inbook)

Abstract
Often you want to execute a command not only on one computer, but on several at once. For example, you might want to report the current statistics on a group of managed servers or update all of your web servers at once.
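
Purely as an illustration of the idea (the book's hacks use BSD shell tools; the host names below are hypothetical and passwordless ssh is assumed), the same pattern in Python:

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    HOSTS = ["web1", "web2", "web3"]   # hypothetical managed servers

    def run_remote(host, command):
        # Run `command` on `host` via ssh and capture its output.
        result = subprocess.run(
            ["ssh", host, command],
            capture_output=True, text=True, timeout=30,
        )
        return host, result.returncode, result.stdout.strip()

    # Execute the same command on all hosts in parallel.
    with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
        for host, code, out in pool.map(lambda h: run_remote(h, "uptime"), HOSTS):
            print(f"{host} [{code}]: {out}")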

[BibTex]

Gaussian Processes in Machine Learning

Rasmussen, CE.

In Advanced Lectures on Machine Learning, Lecture Notes in Computer Science, Vol. 3176, pages: 63-71, (Editors: Bousquet, O., U. von Luxburg and G. Rätsch), Springer, Heidelberg, 2004, Copyright by Springer (inbook)

Abstract
We give a basic introduction to Gaussian Process regression models. We focus on understanding the role of the stochastic process and how it is used to define a distribution over functions. We present the simple equations for incorporating training data and examine how to learn the hyperparameters using the marginal likelihood. We explain the practical advantages of Gaussian Processes and end with conclusions and a look at the current trends in GP work.
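
The "simple equations" referred to are the standard GP regression formulas. For a test input \(x_*\), training inputs X with targets \(\mathbf{y}\), kernel matrix K, kernel vector \(\mathbf{k}_* = k(X, x_*)\), and noise variance \(\sigma_n^2\):

\[
\bar{f}_* \;=\; \mathbf{k}_*^{\top} (K + \sigma_n^{2} I)^{-1}\, \mathbf{y}, \qquad
\mathbb{V}[f_*] \;=\; k(x_*, x_*) \;-\; \mathbf{k}_*^{\top} (K + \sigma_n^{2} I)^{-1}\, \mathbf{k}_*,
\]

and the hyperparameters \(\theta\) are set by maximizing the log marginal likelihood

\[
\log p(\mathbf{y} \mid X, \theta) \;=\; -\tfrac{1}{2}\, \mathbf{y}^{\top} (K + \sigma_n^{2} I)^{-1} \mathbf{y} \;-\; \tfrac{1}{2} \log\bigl|K + \sigma_n^{2} I\bigr| \;-\; \tfrac{n}{2} \log 2\pi .
\]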

PDF PostScript [BibTex]

Local Alignment Kernels for Biological Sequences

Vert, J., Saigo, H., Akutsu, T.

In Kernel Methods in Computational Biology, pages: 131-153, (Editors: Schölkopf, B., K. Tsuda and J. P. Vert), MIT Press, Cambridge, MA, USA, 2004 (inbook)

Web [BibTex]

Protein Classification via Kernel Matrix Completion

Kin, T., Kato, T., Tsuda, K.

In Kernel Methods in Computational Biology, pages: 261-274, (Editors: Schölkopf, B., K. Tsuda and J. P. Vert), MIT Press, Cambridge, MA, USA, 2004 (inbook)

PDF [BibTex]

Introduction to Statistical Learning Theory

Bousquet, O., Boucheron, S., Lugosi, G.

In Advanced Lectures on Machine Learning, Lecture Notes in Artificial Intelligence 3176, pages: 169-207, (Editors: Bousquet, O., U. von Luxburg and G. Rätsch), Springer, Heidelberg, Germany, 2004 (inbook)

PDF [BibTex]

A Primer on Kernel Methods

Vert, J., Tsuda, K., Schölkopf, B.

In Kernel Methods in Computational Biology, pages: 35-70, (Editors: B Schölkopf and K Tsuda and JP Vert), MIT Press, Cambridge, MA, USA, 2004 (inbook)

PDF [BibTex]

Concentration Inequalities

Boucheron, S., Lugosi, G., Bousquet, O.

In Advanced Lectures on Machine Learning, Lecture Notes in Artificial Intelligence 3176, pages: 208-240, (Editors: Bousquet, O., U. von Luxburg and G. Rätsch), Springer, Heidelberg, Germany, 2004 (inbook)

PDF [BibTex]

Kernels for graphs

Kashima, H., Tsuda, K., Inokuchi, A.

In Kernel Methods in Computational Biology, pages: 155-170, (Editors: Schölkopf, B., K. Tsuda and J. P. Vert), MIT Press, Cambridge, MA, USA, 2004 (inbook)

PDF [BibTex]

A primer on molecular biology

Zien, A.

In Kernel Methods in Computational Biology, pages: 3-34, (Editors: Schölkopf, B., K. Tsuda and J. P. Vert), MIT Press, Cambridge, MA, USA, 2004 (inbook)

Abstract
Modern molecular biology provides a rich source of challenging machine learning problems. This tutorial chapter aims to provide the necessary biological background knowledge required to communicate with biologists and to understand and properly formalize a number of most interesting problems in this application domain. The largest part of the chapter (its first section) is devoted to the cell as the basic unit of life. Four aspects of cells are reviewed in sequence: (1) the molecules that cells make use of (above all, proteins, RNA, and DNA); (2) the spatial organization of cells ("compartmentalization"); (3) the way cells produce proteins ("protein expression"); and (4) cellular communication and evolution (of cells and organisms). In the second section, an overview is provided of the most frequent measurement technologies, data types, and data sources. Finally, important open problems in the analysis of these data (bioinformatics challenges) are briefly outlined.

PDF PostScript Web [BibTex]
