

2019


Convolutional neural networks: A magic bullet for gravitational-wave detection?

Gebhard, T., Kilbertus, N., Harry, I., Schölkopf, B.

Physical Review D, 100(6):063015, American Physical Society, September 2019 (article)

link (url) DOI [BibTex]


Data scarcity, robustness and extreme multi-label classification

Babbar, R., Schölkopf, B.

Machine Learning, 108(8):1329-1351, September 2019, Special Issue of the ECML PKDD 2019 Journal Track (article)

DOI [BibTex]


Learning Transferable Representations

Rojas-Carulla, M.

University of Cambridge, UK, 2019 (phdthesis)

[BibTex]


Sample-efficient deep reinforcement learning for continuous control

Gu, S.

University of Cambridge, UK, 2019 (phdthesis)

[BibTex]


A 32-channel multi-coil setup optimized for human brain shimming at 9.4T

Aghaeifar, A., Zhou, J., Heule, R., Tabibian, B., Schölkopf, B., Jia, F., Zaitsev, M., Scheffler, K.

Magnetic Resonance in Medicine, 2019, (Early View) (article)

DOI [BibTex]


Enhancing Human Learning via Spaced Repetition Optimization

Tabibian, B., Upadhyay, U., De, A., Zarezade, A., Schölkopf, B., Gomez Rodriguez, M.

Proceedings of the National Academy of Sciences, 2019, PNAS published ahead of print January 22, 2019 (article)

DOI Project Page Project Page [BibTex]


Spatial Filtering based on Riemannian Manifold for Brain-Computer Interfacing

Xu, J.

Technical University of Munich, Germany, 2019 (mastersthesis)

[BibTex]


Learning to Control Highly Accelerated Ballistic Movements on Muscular Robots

Büchler, D., Calandra, R., Peters, J.

2019 (article) Submitted

Abstract
High-speed and high-acceleration movements are inherently hard to control. Applying learning to the control of such motions on anthropomorphic robot arms can improve the accuracy of the control but might damage the system. The inherent exploration of learning approaches can lead to instabilities and to the robot reaching joint limits at high speeds. Hardware that enables safe exploration of high-speed and high-acceleration movements is therefore desirable. To address this issue, we propose to use robots actuated by Pneumatic Artificial Muscles (PAMs). In this paper, we present a four-degrees-of-freedom (DoF) robot arm that reaches high joint angle accelerations of up to 28000 °/s^2 while avoiding dangerous joint limits thanks to the antagonistic actuation and limits on the air pressure ranges. With this robot arm, we are able to tune control parameters using Bayesian optimization directly on the hardware without additional safety considerations. The achieved tracking performance on a fast trajectory exceeds previous results on comparable PAM-driven robots. We also show that our system can be controlled well on slow trajectories with PID controllers, owing to careful construction choices such as minimal bending of cables, lightweight kinematics, and minimal contact between PAMs and between PAMs and the links. Finally, we propose a novel technique to control the co-contraction of antagonistic muscle pairs. Experimental results illustrate that choosing the optimal co-contraction level is vital to reach better tracking performance. Through the use of PAM-driven robots and learning, we take a small step toward the future development of robots capable of more human-like motions.
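As a rough illustration of the tuning loop described in the abstract, the sketch below uses Bayesian optimization (via scikit-optimize) to pick PID gains that minimize a tracking-error objective. The function simulate_tracking_error, the gain ranges, and the nominal gains are hypothetical placeholders, not values from the paper.

# Sketch: tuning PID gains with Bayesian optimization (scikit-optimize).
# simulate_tracking_error is a hypothetical stand-in for a rollout on the robot;
# here it is a toy quadratic surrogate so the script runs on its own.
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

def simulate_tracking_error(kp, ki, kd):
    # Placeholder objective: pretend the (unknown) best gains are (40, 5, 2)
    # and return a noisy squared tracking error around them.
    best = np.array([40.0, 5.0, 2.0])
    gains = np.array([kp, ki, kd])
    return float(np.sum(((gains - best) / best) ** 2) + 0.01 * np.random.randn())

space = [Real(1.0, 100.0, name="kp"),
         Real(0.1, 20.0, name="ki"),
         Real(0.01, 10.0, name="kd")]

result = gp_minimize(lambda g: simulate_tracking_error(*g), space,
                     n_calls=40, random_state=0)
print("best gains:", result.x, "estimated error:", result.fun)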

Arxiv Video [BibTex]


Inferring causation from time series with perspectives in Earth system sciences

Runge, J., Bathiany, S., Bollt, E., Camps-Valls, G., Coumou, D., Deyle, E., Glymour, C., Kretschmer, M., Mahecha, M., van Nes, E., Peters, J., Quax, R., Reichstein, M., Scheffer, M., Schölkopf, B., Spirtes, P., Sugihara, G., Sun, J., Zhang, K., Zscheischler, J.

Nature Communications, 2019 (article) In revision

[BibTex]


Eigendecompositions of Transfer Operators in Reproducing Kernel Hilbert Spaces

Klus, S., Schuster, I., Muandet, K.

Journal of Nonlinear Science, 2019, First Online: 21 August 2019 (article)

DOI [BibTex]

2002


Optimized Support Vector Machines for Nonstationary Signal Classification

Davy, M., Gretton, A., Doucet, A., Rayner, P.

IEEE Signal Processing Letters, 9(12):442-445, December 2002 (article)

Abstract
This letter describes an efficient method to perform nonstationary signal classification. A support vector machine (SVM) algorithm is introduced and its parameters optimised in a principled way. Simulations demonstrate that our low complexity method outperforms state-of-the-art nonstationary signal classification techniques.

PostScript Web DOI [BibTex]


A New Discriminative Kernel from Probabilistic Models

Tsuda, K., Kawanabe, M., Rätsch, G., Sonnenburg, S., Müller, K.

Neural Computation, 14(10):2397-2414, October 2002 (article)

PDF [BibTex]


Functional Genomics of Osteoarthritis

Aigner, T., Bartnik, E., Zien, A., Zimmer, R.

Pharmacogenomics, 3(5):635-650, September 2002 (article)

Web [BibTex]


Constructing Boosting algorithms from SVMs: an application to one-class classification.

Rätsch, G., Mika, S., Schölkopf, B., Müller, K.

IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(9):1184-1199, September 2002 (article)

Abstract
We show via an equivalence of mathematical programs that a support vector (SV) algorithm can be translated into an equivalent boosting-like algorithm and vice versa. We exemplify this translation procedure for a new algorithm—one-class leveraging—starting from the one-class support vector machine (1-SVM). This is a first step toward unsupervised learning in a boosting framework. Building on so-called barrier methods known from the theory of constrained optimization, it returns a function, written as a convex combination of base hypotheses, that characterizes whether a given test point is likely to have been generated from the distribution underlying the training data. Simulations on one-class classification problems demonstrate the usefulness of our approach.
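For orientation, the one-class SVM that the leveraging procedure starts from can be run in a few lines; this is a minimal scikit-learn sketch on synthetic data, not the boosting-style algorithm developed in the paper.

# Minimal one-class SVM sketch (scikit-learn); synthetic data, illustrative parameters.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.RandomState(0)
train = 0.3 * rng.randn(200, 2)                  # samples from the "normal" distribution
test = np.vstack([0.3 * rng.randn(20, 2),        # more normal samples
                  rng.uniform(-4, 4, (20, 2))])  # outliers

clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(train)
pred = clf.predict(test)                         # +1: consistent with training data, -1: outlier
print("flagged as outliers:", int((pred == -1).sum()), "of", len(test))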

DOI [BibTex]


Co-Clustering of Biological Networks and Gene Expression Data

Hanisch, D., Zien, A., Zimmer, R., Lengauer, T.

Bioinformatics, 18(Suppl 1):145S-154S, July 2002 (article)

Abstract
Motivation: Large-scale gene expression data are often analysed by clustering genes based on gene expression data alone, though a priori knowledge in the form of biological networks is available. The use of this additional information promises to improve exploratory analysis considerably. Results: We propose constructing a distance function which combines information from expression data and biological networks. Based on this function, we compute a joint clustering of genes and vertices of the network. This general approach is elaborated for metabolic networks. We define a graph distance function on such networks and combine it with a correlation-based distance function for gene expression measurements. A hierarchical clustering and an associated statistical measure are computed to arrive at a reasonable number of clusters. Our method is validated using expression data of the yeast diauxic shift. The resulting clusters are easily interpretable in terms of the biochemical network and the gene expression data and suggest that our method is able to automatically identify processes that are relevant under the measured conditions.
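A minimal sketch of the general recipe (combine a correlation-based expression distance with a graph distance and cluster hierarchically); the toy network, expression matrix, and mixing weight are illustrative and do not reproduce the paper's metabolic-network distance.

# Sketch: joint distance from expression correlation and network shortest paths,
# followed by average-linkage hierarchical clustering (toy data, illustrative weight).
import numpy as np
import networkx as nx
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

genes = ["g1", "g2", "g3", "g4", "g5"]
expr = np.random.RandomState(0).randn(len(genes), 10)   # toy expression profiles

d_expr = 1.0 - np.corrcoef(expr)                         # correlation-based distance

G = nx.Graph([("g1", "g2"), ("g2", "g3"), ("g3", "g4"), ("g4", "g5")])
sp = dict(nx.all_pairs_shortest_path_length(G))
d_graph = np.array([[sp[a][b] for b in genes] for a in genes], dtype=float)
d_graph /= d_graph.max()                                 # rescale to [0, 1]

alpha = 0.5                                              # illustrative mixing weight
d_joint = alpha * d_expr + (1.0 - alpha) * d_graph
np.fill_diagonal(d_joint, 0.0)

Z = linkage(squareform(d_joint, checks=False), method="average")
print(fcluster(Z, t=2, criterion="maxclust"))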

Web [BibTex]


Confidence measures for protein fold recognition

Sommer, I., Zien, A., von Ohsen, N., Zimmer, R., Lengauer, T.

Bioinformatics, 18(6):802-812, June 2002 (article)

[BibTex]


The contributions of color to recognition memory for natural scenes

Wichmann, F., Sharpe, L., Gegenfurtner, K.

Journal of Experimental Psychology: Learning, Memory and Cognition, 28(3):509-520, May 2002 (article)

Abstract
The authors used a recognition memory paradigm to assess the influence of color information on visual memory for images of natural scenes. Subjects performed 5-10% better for colored than for black-and-white images independent of exposure duration. Experiment 2 indicated little influence of contrast once the images were suprathreshold, and Experiment 3 revealed that performance worsened when images were presented in color and tested in black and white, or vice versa, leading to the conclusion that the surface property color is part of the memory representation. Experiments 4 and 5 exclude the possibility that the superior recognition memory for colored images results solely from attentional factors or saliency. Finally, the recognition memory advantage disappears for falsely colored images of natural scenes: The improvement in recognition memory depends on the color congruence of presented images with learned knowledge about the color gamut found within natural scenes. The results can be accounted for within a multiple memory systems framework.

PDF Web DOI [BibTex]


Nonlinear Multivariate Analysis with Geodesic Kernels

Kuss, M.

Biologische Kybernetik, Technische Universität Berlin, February 2002 (diplomathesis)

GZIP [BibTex]


Training invariant support vector machines

DeCoste, D., Schölkopf, B.

Machine Learning, 46(1-3):161-190, January 2002 (article)

Abstract
Practical experience has shown that in order to obtain the best possible performance, prior knowledge about invariances of a classification problem at hand ought to be incorporated into the training procedure. We describe and review all known methods for doing so in support vector machines, provide experimental results, and discuss their respective merits. One of the significant new results reported in this work is our recent achievement of the lowest reported test error on the well-known MNIST digit recognition benchmark task, with SVM training times that are also significantly faster than previous SVM methods.
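One of the techniques reviewed in the paper, the virtual support vector method, can be sketched as follows: train an SVM, apply the known invariance transformation to its support vectors, and retrain on the enlarged set. The data and the invariance (small 2-D rotations) below are toy choices for illustration.

# Virtual support vector sketch: augment support vectors with transformed copies
# under an assumed invariance (here small rotations) and retrain.
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(100, 2) + [2, 2], rng.randn(100, 2) - [2, 2]])
y = np.hstack([np.ones(100), -np.ones(100)])

def rotate(points, angle):
    c, s = np.cos(angle), np.sin(angle)
    return points @ np.array([[c, -s], [s, c]]).T

# 1) Train an initial SVM and extract its support vectors.
svm = SVC(kernel="rbf", C=10.0).fit(X, y)
sv, sv_y = X[svm.support_], y[svm.support_]

# 2) Generate "virtual" support vectors by applying the invariance.
angles = [-0.1, 0.1]
X_virtual = np.vstack([rotate(sv, a) for a in angles])
y_virtual = np.tile(sv_y, len(angles))

# 3) Retrain on the original data plus the virtual support vectors.
svm_vsv = SVC(kernel="rbf", C=10.0).fit(np.vstack([X, X_virtual]),
                                        np.hstack([y, y_virtual]))
print("support vectors before/after:", len(sv), len(svm_vsv.support_))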

PDF DOI [BibTex]


Model Selection for Small Sample Regression

Chapelle, O., Vapnik, V., Bengio, Y.

Machine Learning, 48(1-3):9-23, 2002 (article)

Abstract
Model selection is an important ingredient of many machine learning algorithms, in particular when the sample size is small, in order to strike the right trade-off between overfitting and underfitting. Previous classical results for linear regression are based on an asymptotic analysis. We present a new penalization method for performing model selection for regression that is appropriate even for small samples. Our penalization is based on an accurate estimator of the ratio of the expected training error and the expected generalization error, in terms of the expected eigenvalues of the input covariance matrix.

PostScript [BibTex]


Concentration Inequalities and Empirical Processes Theory Applied to the Analysis of Learning Algorithms

Bousquet, O.

Biologische Kybernetik, Ecole Polytechnique, 2002 (phdthesis) Accepted

Abstract
New classification algorithms based on the notion of 'margin' (e.g., Support Vector Machines, Boosting) have recently been developed. The goal of this thesis is to better understand how they work, via a study of their theoretical performance. In order to do this, a general framework for real-valued classification is proposed. In this framework, it appears that the natural tools to use are Concentration Inequalities and Empirical Processes Theory. Thanks to an adaptation of these tools, a new measure of the size of a class of functions is introduced, which can be computed from the data. This makes it possible, on the one hand, to better understand the role of eigenvalues of the kernel matrix in Support Vector Machines and, on the other hand, to obtain empirical model selection criteria.
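Data-dependent capacity measures of the kind alluded to here are typified by the empirical Rademacher complexity, quoted below as general background rather than as the specific measure introduced in the thesis:

\[
\hat{\mathcal{R}}_n(\mathcal{F}) = \mathbb{E}_{\sigma}\left[\, \sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} \sigma_i f(x_i) \right],
\]

where the $\sigma_i$ are independent random signs and $x_1,\dots,x_n$ are the observed data; being computable from the sample, such quantities can replace distribution-dependent capacity terms in generalization bounds.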

PostScript [BibTex]


Support Vector Machines: Induction Principle, Adaptive Tuning and Prior Knowledge

Chapelle, O.

Biologische Kybernetik, 2002 (phdthesis)

Abstract
This thesis presents a theoretical and practical study of Support Vector Machines (SVM) and related learning algorithms. In a first part, we introduce a new induction principle from which SVMs can be derived, and some new algorithms are also presented in this framework. In a second part, after studying how to estimate the generalization error of an SVM, we suggest choosing the kernel parameters of an SVM by minimizing this estimate. Several applications such as feature selection are presented. Finally, the third part deals with the incorporation of prior knowledge into a learning algorithm; more specifically, we study the case of known invariant transformations and the use of unlabeled data.

GZIP [BibTex]


Contrast discrimination with sinusoidal gratings of different spatial frequency

Bird, C., Henning, G., Wichmann, F.

Journal of the Optical Society of America A, 19(7), pages: 1267-1273, 2002 (article)

Abstract
The detectability of contrast increments was measured as a function of the contrast of a masking or “pedestal” grating at a number of different spatial frequencies ranging from 2 to 16 cycles per degree of visual angle. The pedestal grating always had the same orientation, spatial frequency, and phase as the signal. The shape of the contrast increment threshold versus pedestal contrast (TvC) functions depends on the performance level used to define the “threshold,” but when both axes are normalized by the contrast corresponding to 75% correct detection at each frequency, the TvC functions at a given performance level are identical. Confidence intervals on the slope of the rising part of the TvC functions are so wide that it is not possible with our data to reject Weber’s Law.

PDF [BibTex]


A Bennett Concentration Inequality and Its Application to Suprema of Empirical Processes

Bousquet, O.

C. R. Acad. Sci. Paris, Ser. I, 334, pages: 495-500, 2002 (article)

Abstract
We introduce new concentration inequalities for functions on product spaces. They allow one to obtain a Bennett-type deviation bound for suprema of empirical processes indexed by upper-bounded functions. The result improves on Rio's version (Rio, 2001) of Talagrand's inequality (Talagrand, 1996) for identically distributed variables.
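For reference, the classical scalar Bennett inequality that this result extends to suprema of empirical processes reads as follows (standard form, quoted from memory): if $X_1,\dots,X_n$ are independent with $\mathbb{E}X_i = 0$, $X_i \le M$ almost surely, and $\sigma^2 = \sum_{i=1}^n \mathbb{E}X_i^2$, then for all $t \ge 0$,

\[
\mathbb{P}\Big(\sum_{i=1}^n X_i \ge t\Big) \le \exp\!\left(-\frac{\sigma^2}{M^2}\, h\!\left(\frac{Mt}{\sigma^2}\right)\right),
\qquad h(u) = (1+u)\log(1+u) - u .
\]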

PDF PostScript [BibTex]


Numerical evolution of axisymmetric, isolated systems in general relativity

Frauendiener, J., Hein, M.

Physical Review D, 66, pages: 124004-124004, 2002 (article)

Abstract
We describe in this article a new code for evolving axisymmetric isolated systems in general relativity. Such systems are described by asymptotically flat space-times, which have the property that they admit a conformal extension. We are working directly in the extended conformal manifold and solve numerically Friedrich's conformal field equations, which state that Einstein's equations hold in the physical space-time. Because of the compactness of the conformal space-time the entire space-time can be calculated on a finite numerical grid. We describe in detail the numerical scheme, especially the treatment of the axisymmetry and the boundary.

GZIP [BibTex]


Marginalized kernels for biological sequences

Tsuda, K., Kin, T., Asai, K.

Bioinformatics, 18(Suppl 1):268-275, 2002 (article)

PDF [BibTex]


Stability and Generalization

Bousquet, O., Elisseeff, A.

Journal of Machine Learning Research, 2, pages: 499-526, 2002 (article)

Abstract
We define notions of stability for learning algorithms and show how to use these notions to derive generalization error bounds based on the empirical error and the leave-one-out error. The methods we use can be applied in the regression framework as well as in the classification one when the classifier is obtained by thresholding a real-valued function. We study the stability properties of large classes of learning algorithms such as regularization based algorithms. In particular we focus on Hilbert space regularization and Kullback-Leibler regularization. We demonstrate how to apply the results to SVM for regression and classification.
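In the paper's terminology, an algorithm $A$ is uniformly stable with constant $\beta$ if replacing any single training example changes the loss at any point by at most $\beta$:

\[
\sup_{z}\, \big|\, \ell(A_S, z) - \ell(A_{S^{\setminus i}}, z) \,\big| \le \beta \quad \text{for all training sets } S \text{ and all } i .
\]

The resulting generalization bounds are then, roughly (stated here up to constants and from memory), of the form

\[
R(A_S) \le R_{\mathrm{emp}}(A_S) + 2\beta + \big(4m\beta + M\big)\sqrt{\frac{\ln(1/\delta)}{2m}} \quad \text{with probability at least } 1-\delta,
\]

where $m$ is the sample size and $M$ bounds the loss; for Hilbert-space regularization, $\beta$ scales like $O(1/(\lambda m))$ in the regularization parameter $\lambda$.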

PDF PostScript [BibTex]


Subspace information criterion for non-quadratic regularizers – model selection for sparse regressors

Tsuda, K., Sugiyama, M., Müller, K.

IEEE Trans Neural Networks, 13(1):70-80, 2002 (article)

PDF [BibTex]


Modeling splicing sites with pairwise correlations

Arita, M., Tsuda, K., Asai, K.

Bioinformatics, 18(Suppl 2):27-34, 2002 (article)

PDF [BibTex]


Perfusion Quantification using Gaussian Process Deconvolution

Andersen, IK., Szymkowiak, A., Rasmussen, CE., Hanson, LG., Marstrand, JR., Larsson, HBW., Hansen, LK.

Magnetic Resonance in Medicine, 48:351-361, 2002 (article)

Abstract
The quantification of perfusion using dynamic susceptibility contrast MR imaging requires deconvolution to obtain the residual impulse-response function (IRF). Here, a method using a Gaussian process for deconvolution, GPD, is proposed. The fact that the IRF is smooth is incorporated as a constraint in the method. The GPD method, which automatically estimates the noise level in each voxel, has the advantage that model parameters are optimized automatically. The GPD is compared to singular value decomposition (SVD) using a common threshold for the singular values and to SVD using a threshold optimized according to the noise level in each voxel. The comparison is carried out using artificial data as well as data from healthy volunteers. It is shown that GPD is comparable to SVD with a variable, optimized threshold when determining the maximum of the IRF, which is directly related to the perfusion. GPD provides a better estimate of the entire IRF. As the signal-to-noise ratio or the time resolution of the measurements increases, GPD is shown to be superior to SVD. This is also found for large distribution volumes.
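A generic linear-Gaussian sketch of the idea (not the authors' exact parameterization): put a smooth GP prior on the residue function r, model the tissue curve as y = A r + noise, where A is the convolution matrix built from the arterial input function, and read off the posterior mean in closed form. The arterial input function, kernel length-scale, and noise level below are illustrative.

# GP deconvolution sketch for perfusion: y = A r + noise, smooth GP prior on r.
# All signals are synthetic; kernel length-scale and noise level are illustrative.
import numpy as np

n, dt = 60, 1.0
t = np.arange(n) * dt

aif = np.exp(-0.5 * ((t - 10.0) / 3.0) ** 2)            # toy arterial input function
A = dt * np.array([[aif[i - j] if i >= j else 0.0       # lower-triangular convolution matrix
                    for j in range(n)] for i in range(n)])

r_true = np.exp(-np.maximum(t - 5.0, 0.0) / 8.0) * (t >= 5.0)   # toy residue function
sigma = 0.02
y = A @ r_true + sigma * np.random.RandomState(0).randn(n)

# Squared-exponential GP prior on r encodes the smoothness constraint.
ell = 4.0
K = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / ell ** 2)

# Posterior mean of r given y (standard linear-Gaussian conditioning).
S = A @ K @ A.T + sigma ** 2 * np.eye(n)
r_post = K @ A.T @ np.linalg.solve(S, y)

print("true peak %.3f, estimated peak %.3f" % (r_true.max(), r_post.max()))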

PDF PostScript [BibTex]


Tracking a Small Set of Experts by Mixing Past Posteriors

Bousquet, O., Warmuth, M.

Journal of Machine Learning Research, 3, pages: 363-396, (Editors: Long, P.), 2002 (article)

Abstract
In this paper, we examine on-line learning problems in which the target concept is allowed to change over time. In each trial a master algorithm receives predictions from a large set of n experts. Its goal is to predict almost as well as the best sequence of such experts chosen off-line by partitioning the training sequence into k+1 sections and then choosing the best expert for each section. We build on methods developed by Herbster and Warmuth and consider an open problem posed by Freund where the experts in the best partition are from a small pool of size m. Since k >> m, the best expert shifts back and forth between the experts of the small pool. We propose algorithms that solve this open problem by mixing the past posteriors maintained by the master algorithm. We relate the number of bits needed for encoding the best partition to the loss bounds of the algorithms. Instead of paying log n for choosing the best expert in each section we first pay log (n choose m) bits in the bounds for identifying the pool of m experts and then log m bits per new section. In the bounds we also pay twice for encoding the boundaries of the sections.
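One simple member of this family can be sketched as follows: run exponentially weighted averaging over the experts and, after each trial, mix a small amount of the average of all past posteriors back into the weight vector. The learning rate, mixing rate, and toy losses are illustrative; the paper analyses a whole family of such mixing schemes and their loss bounds.

# Sketch: exponential weights with a "mix past posteriors" step (illustrative parameters).
import numpy as np

def mix_past_posteriors(losses, eta=2.0, alpha=0.05):
    T, n = losses.shape
    w = np.full(n, 1.0 / n)                       # current weights over experts
    past = []                                     # past posteriors v_1, ..., v_t
    total_loss = 0.0
    for t in range(T):
        total_loss += float(w @ losses[t])        # master's expected loss this trial
        v = w * np.exp(-eta * losses[t])          # loss update -> posterior
        v /= v.sum()
        past.append(v)
        mixture = np.mean(past, axis=0)           # uniform average of past posteriors
        w = (1.0 - alpha) * v + alpha * mixture   # mixing step
    return total_loss

# Toy losses: expert 0 is best in the first half, expert 1 in the second half.
rng = np.random.RandomState(0)
L = rng.rand(200, 5)
L[:100, 0] = 0.0
L[100:, 1] = 0.0
print("master loss:", round(mix_past_posteriors(L), 2),
      "best fixed expert:", round(L.sum(axis=0).min(), 2))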

PDF PostScript [BibTex]


A femoral arteriovenous shunt facilitates arterial whole blood sampling in animals

Weber, B., Burger, C., Biro, P., Buck, A.

Eur J Nucl Med Mol Imaging, 29, pages: 319-323, 2002 (article)

[BibTex]


Contrast discrimination with pulse-trains in pink noise

Henning, G., Bird, C., Wichmann, F.

Journal of the Optical Society of America A, 19(7), pages: 1259-1266, 2002 (article)

Abstract
Detection performance was measured with sinusoidal and pulse-train gratings. Although the 2.09-c/deg pulse-train, or line gratings, contained at least 8 harmonics all at equal contrast, they were no more detectable than their most detectable component. The addition of broadband pink noise designed to equalize the detectability of the components of the pulse train made the pulse train about a factor of four more detectable than any of its components. However, in contrast-discrimination experiments, with a pedestal or masking grating of the same form and phase as the signal and 15% contrast, the noise did not affect the discrimination performance of the pulse train relative to that obtained with its sinusoidal components. We discuss the implications of these observations for models of early vision in particular the implications for possible sources of internal noise.

PDF [BibTex]


Choosing Multiple Parameters for Support Vector Machines

Chapelle, O., Vapnik, V., Bousquet, O., Mukherjee, S.

Machine Learning, 46(1):131-159, 2002 (article)

Abstract
The problem of automatically tuning multiple parameters for pattern recognition Support Vector Machines (SVM) is considered. This is done by minimizing some estimates of the generalization error of SVMs using a gradient descent algorithm over the set of parameters. Usual methods for choosing parameters, based on exhaustive search, become intractable as soon as the number of parameters exceeds two. Some experimental results assess the feasibility of our approach for a large number of parameters (more than 100) and demonstrate an improvement of generalization performance.
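The spirit of the approach can be illustrated by treating an estimate of the generalization error as a function of the kernel and regularization parameters and minimizing it numerically over their logarithms. The sketch below substitutes a cross-validation estimate and a gradient-free optimizer for the differentiable error estimates and gradient descent used in the paper; data and starting point are illustrative.

# Sketch: choose (C, gamma) for an RBF-kernel SVM by minimizing an error estimate
# over log-parameters (cross-validation + Nelder-Mead as stand-ins).
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

def estimated_error(log_params):
    C, gamma = np.exp(log_params)
    scores = cross_val_score(SVC(C=C, gamma=gamma, kernel="rbf"), X, y, cv=5)
    return 1.0 - scores.mean()

res = minimize(estimated_error, x0=np.log([1.0, 0.1]), method="Nelder-Mead",
               options={"maxiter": 40})
print("chosen C=%.3g, gamma=%.3g, CV error=%.3f"
      % (np.exp(res.x[0]), np.exp(res.x[1]), res.fun))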

PDF PostScript [BibTex]

1999


Some Aspects of Modelling Human Spatial Vision: Contrast Discrimination

Wichmann, F.

University of Oxford, UK, October 1999 (phdthesis)

[BibTex]


Lernen mit Kernen: Support-Vektor-Methoden zur Analyse hochdimensionaler Daten

Schölkopf, B., Müller, K., Smola, A.

Informatik - Forschung und Entwicklung, 14(3):154-163, September 1999 (article)

Abstract
We describe recent developments and results of statistical learning theory. In the framework of learning from examples, two factors control generalization ability: how well the training data are explained, and the complexity of the learning machine. We describe kernel algorithms in feature spaces as elegant and efficient methods of realizing such machines. Examples thereof are Support Vector Machines (SVM) and Kernel PCA (Principal Component Analysis). More important than any individual example of a kernel algorithm, however, is the insight that any algorithm that can be cast in terms of dot products can be generalized to a nonlinear setting using kernels. Finally, we illustrate the significance of kernel algorithms by briefly describing industrial and academic applications, including ones where we obtained benchmark record results.
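The 'kernelize anything written in terms of dot products' insight can be made concrete with kernel PCA, one of the examples named above: the covariance eigenproblem is replaced by an eigendecomposition of the centered kernel matrix. A minimal NumPy sketch (kernel choice, bandwidth, and data are illustrative):

# Minimal kernel PCA sketch: eigendecomposition of the centered Gram matrix.
import numpy as np

def rbf_kernel(X, gamma=1.0):
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_pca(X, n_components=2, gamma=1.0):
    K = rbf_kernel(X, gamma)
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one      # center the data in feature space
    vals, vecs = np.linalg.eigh(Kc)                 # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]
    # Projections of the training points onto the leading nonlinear components.
    return vecs[:, :n_components] * np.sqrt(np.maximum(vals[:n_components], 1e-12))

X = np.random.RandomState(0).randn(100, 3)
print(kernel_pca(X, n_components=2, gamma=0.5).shape)   # (100, 2)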

PDF PDF DOI [BibTex]


Input space versus feature space in kernel-based methods

Schölkopf, B., Mika, S., Burges, C., Knirsch, P., Müller, K., Rätsch, G., Smola, A.

IEEE Transactions On Neural Networks, 10(5):1000-1017, September 1999 (article)

Abstract
This paper collects some ideas targeted at advancing our understanding of the feature spaces associated with support vector (SV) kernel functions. We first discuss the geometry of feature space. In particular, we review what is known about the shape of the image of input space under the feature space map, and how this influences the capacity of SV methods. Following this, we describe how the metric governing the intrinsic geometry of the mapped surface can be computed in terms of the kernel, using the example of the class of inhomogeneous polynomial kernels, which are often used in SV pattern recognition. We then discuss the connection between feature space and input space by dealing with the question of how one can, given some vector in feature space, find a preimage (exact or approximate) in input space. We describe algorithms to tackle this issue, and show their utility in two applications of kernel methods. First, we use it to reduce the computational complexity of SV decision functions; second, we combine it with the kernel PCA algorithm, thereby constructing a nonlinear statistical denoising technique which is shown to perform well on real-world data.
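A small identity that underlies much of this feature-space geometry: distances in feature space can be evaluated from kernel values alone, without constructing the map $\Phi$,

\[
\|\Phi(x) - \Phi(x')\|^2 = k(x, x) - 2\,k(x, x') + k(x', x'),
\]

and quantities of this form are what the preimage algorithms minimize when searching for an input-space point whose image approximates a given feature-space vector.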

Web DOI [BibTex]


p73 and p63 are homotetramers capable of weak heterotypic interactions with each other but not with p53.

Davison, T., Vagner, C., Kaghad, M., Ayed, A., Caput, D., Arrowsmith, C. H.

Journal of Biological Chemistry, 274(26):18709-18714, June 1999 (article)

Abstract
Mutations in the p53 tumor suppressor gene are the most frequent genetic alterations found in human cancers. Recent identification of two human homologues of p53 has raised the prospect of functional interactions between family members via a conserved oligomerization domain. Here we report in vitro and in vivo analysis of homo- and hetero-oligomerization of p53 and its homologues, p63 and p73. The oligomerization domains of p63 and p73 can independently fold into stable homotetramers, as previously observed for p53. However, the oligomerization domain of p53 does not associate with that of either p73 or p63, even when p53 is in 15-fold excess. On the other hand, the oligomerization domains of p63 and p73 are able to weakly associate with one another in vitro. In vivo co-transfection assays of the ability of p53 and its homologues to activate reporter genes showed that a DNA-binding mutant of p53 was not able to act in a dominant negative manner over wild-type p73 or p63 but that a p73 mutant could inhibit the activity of wild-type p63. These data suggest that mutant p53 in cancer cells will not interact with endogenous or exogenous p63 or p73 via their respective oligomerization domains. It also establishes that the multiple isoforms of p63 as well as those of p73 are capable of interacting via their common oligomerization domain.

Web [BibTex]


Spatial Learning and Localization in Animals: A Computational Model and Its Implications for Mobile Robots

Balakrishnan, K., Bousquet, O., Honavar, V.

Adaptive Behavior, 7(2):173-216, 1999 (article)

[BibTex]


SVMs for Histogram Based Image Classification

Chapelle, O., Haffner, P., Vapnik, V.

IEEE Transactions on Neural Networks, (9), 1999 (article)

Abstract
Traditional classification approaches generalize poorly on image classification tasks, because of the high dimensionality of the feature space. This paper shows that Support Vector Machines (SVM) can generalize well on difficult image classification problems where the only features are high dimensional histograms. Heavy-tailed RBF kernels of the form $K(\mathbf{x},\mathbf{y}) = e^{-\rho \sum_i |x_i^a - y_i^a|^b}$ with $a \leq 1$ and $b \leq 2$ are evaluated on the classification of images extracted from the Corel Stock Photo Collection and shown to far outperform traditional polynomial or Gaussian RBF kernels. Moreover, we observed that a simple remapping of the input $x_i \rightarrow x_i^a$ improves the performance of linear SVMs to such an extent that it makes them, for this problem, a valid alternative to RBF kernels.
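A minimal sketch of the kernel from the abstract, plugged into an SVM via a precomputed Gram matrix (scikit-learn); the histograms are random stand-ins for the Corel image histograms, and rho, a, b are illustrative values within the stated ranges.

# Heavy-tailed kernel K(x, y) = exp(-rho * sum_i |x_i^a - y_i^a|^b) with a precomputed-kernel SVM.
import numpy as np
from sklearn.svm import SVC

def heavy_tailed_kernel(X, Y, rho=1.0, a=0.5, b=1.0):
    d = np.abs((X ** a)[:, None, :] - (Y ** a)[None, :, :]) ** b
    return np.exp(-rho * d.sum(axis=2))

rng = np.random.RandomState(0)
X = rng.dirichlet(np.ones(16), size=200)     # toy normalized histograms
y = (X[:, 0] > X[:, 1]).astype(int)          # toy labels

K_train = heavy_tailed_kernel(X, X)
clf = SVC(kernel="precomputed", C=10.0).fit(K_train, y)
print("training accuracy:", clf.score(K_train, y))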

GZIP [BibTex]


Apprentissage Automatique et Simplicite

Bousquet, O.

Biologische Kybernetik, 1999, in French (diplomathesis)

PostScript [BibTex]


Machine Learning and Language Acquisition: A Model of Child’s Learning of Turkish Morphophonology

Altun, Y.

Middle East Technical University, Ankara, Turkey, 1999 (mastersthesis)

[BibTex]