2002


Stability and Generalization

Bousquet, O., Elisseeff, A.

Journal of Machine Learning Research, 2, pages: 499-526, 2002 (article)

Abstract
We define notions of stability for learning algorithms and show how to use these notions to derive generalization error bounds based on the empirical error and the leave-one-out error. The methods we use can be applied in the regression framework as well as in the classification one when the classifier is obtained by thresholding a real-valued function. We study the stability properties of large classes of learning algorithms, such as regularization-based algorithms. In particular, we focus on Hilbert space regularization and Kullback-Leibler regularization. We demonstrate how to apply the results to SVM for regression and classification.
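
For orientation, a sketch of the form the paper's uniform-stability bound takes (stated here as an illustration, not a verbatim quotation): for a learning algorithm with uniform stability $\beta$ with respect to a loss bounded by $M$, with probability at least $1-\delta$ over a training sample of size $m$, the risk satisfies $R \leq R_{emp} + 2\beta + (4m\beta + M)\sqrt{\ln(1/\delta)/(2m)}$. For Hilbert space regularization with regularization parameter $\lambda$, $\beta$ scales like $O(1/(\lambda m))$, which is what makes the bound non-trivial.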

PDF PostScript [BibTex]

Subspace information criterion for non-quadratic regularizers – model selection for sparse regressors

Tsuda, K., Sugiyama, M., Müller, K.

IEEE Trans Neural Networks, 13(1):70-80, 2002 (article)

PDF [BibTex]

Modeling splicing sites with pairwise correlations

Arita, M., Tsuda, K., Asai, K.

Bioinformatics, 18(Suppl 2):27-34, 2002 (article)

PDF [BibTex]

Perfusion Quantification using Gaussian Process Deconvolution

Andersen, IK., Szymkowiak, A., Rasmussen, CE., Hanson, LG., Marstrand, JR., Larsson, HBW., Hansen, LK.

Magnetic Resonance in Medicine, (48):351-361, 2002 (article)

Abstract
The quantification of perfusion using dynamic susceptibility contrast MR imaging requires deconvolution to obtain the residual impulse-response function (IRF). Here, a method using a Gaussian process for deconvolution, GPD, is proposed. The fact that the IRF is smooth is incorporated as a constraint in the method. The GPD method, which automatically estimates the noise level in each voxel, has the advantage that model parameters are optimized automatically. The GPD is compared to singular value decomposition (SVD) using a common threshold for the singular values and to SVD using a threshold optimized according to the noise level in each voxel. The comparison is carried out using artificial data as well as data from healthy volunteers. It is shown that GPD is comparable to SVD with a variably optimized threshold when determining the maximum of the IRF, which is directly related to the perfusion, and that GPD provides a better estimate of the entire IRF. As the signal-to-noise ratio or the time resolution of the measurements increases, GPD is shown to be superior to SVD. This is also found for large distribution volumes.
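
For context, the SVD baseline that GPD is compared against fits in a few lines; a minimal numpy sketch, assuming uniformly sampled arterial input function (AIF) and tissue curves (the function name and the fixed relative threshold are illustrative, not the paper's settings):

    import numpy as np

    def svd_deconvolve(aif, tissue, dt, threshold=0.2):
        """Truncated-SVD deconvolution of a tissue concentration curve.

        aif, tissue : 1-D arrays sampled at interval dt
        threshold   : singular values below threshold * s_max are discarded
        Returns the impulse-response function (IRF) scaled by flow.
        """
        n = len(aif)
        # Lower-triangular (Toeplitz) convolution matrix built from the AIF
        A = np.array([[aif[i - j] if i >= j else 0.0
                       for j in range(n)] for i in range(n)]) * dt
        U, s, Vt = np.linalg.svd(A)
        s_inv = np.where(s > threshold * s.max(), 1.0 / s, 0.0)
        irf = Vt.T @ (s_inv * (U.T @ tissue))
        return irf

    # Perfusion is proportional to the maximum of the recovered IRF:
    # perfusion = irf.max()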

PDF PostScript [BibTex]

Tracking a Small Set of Experts by Mixing Past Posteriors

Bousquet, O., Warmuth, M.

Journal of Machine Learning Research, 3, pages: 363-396, (Editors: Long, P.), 2002 (article)

Abstract
In this paper, we examine on-line learning problems in which the target concept is allowed to change over time. In each trial a master algorithm receives predictions from a large set of n experts. Its goal is to predict almost as well as the best sequence of such experts chosen off-line by partitioning the training sequence into k+1 sections and then choosing the best expert for each section. We build on methods developed by Herbster and Warmuth and consider an open problem posed by Freund where the experts in the best partition are from a small pool of size m. Since k >> m, the best expert shifts back and forth between the experts of the small pool. We propose algorithms that solve this open problem by mixing the past posteriors maintained by the master algorithm. We relate the number of bits needed for encoding the best partition to the loss bounds of the algorithms. Instead of paying log n for choosing the best expert in each section we first pay log (n choose m) bits in the bounds for identifying the pool of m experts and then log m bits per new section. In the bounds we also pay twice for encoding the boundaries of the sections.
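
A minimal sketch of one member of this family of master algorithms, the variant that mixes a small fraction of the uniform average of all past posteriors back into the current one (the function and parameter names, and this particular mixing scheme, are illustrative rather than the paper's tuned choices):

    import numpy as np

    def mix_past_posteriors(losses, eta=1.0, alpha=0.05):
        """Exponential-weights master that mixes in past posteriors.

        losses : (T, n) array, losses[t, i] = loss of expert i at trial t
        Returns the (T, n) array of weight vectors used for prediction.
        """
        T, n = losses.shape
        past = [np.full(n, 1.0 / n)]        # v_0: uniform prior
        weights = np.full(n, 1.0 / n)
        history = []
        for t in range(T):
            history.append(weights.copy())
            # Loss update (exponential weights)
            v = weights * np.exp(-eta * losses[t])
            v /= v.sum()
            past.append(v)
            # Mixing update: share a fraction alpha with the average past posterior
            weights = (1.0 - alpha) * v + alpha * np.mean(past, axis=0)
        return np.array(history)

With alpha = 0 this reduces to the standard exponentially weighted average; a positive alpha lets the weight of a previously good expert recover quickly when the best expert shifts back to a member of the small pool.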

PDF PostScript [BibTex]

A femoral arteriovenous shunt facilitates arterial whole blood sampling in animals

Weber, B., Burger, C., Biro, P., Buck, A.

Eur J Nucl Med Mol Imaging, 29, pages: 319-323, 2002 (article)

[BibTex]

Contrast discrimination with pulse-trains in pink noise

Henning, G., Bird, C., Wichmann, F.

Journal of the Optical Society of America A, 19(7), pages: 1259-1266, 2002 (article)

Abstract
Detection performance was measured with sinusoidal and pulse-train gratings. Although the 2.09-c/deg pulse-train, or line gratings, contained at least 8 harmonics all at equal contrast, they were no more detectable than their most detectable component. The addition of broadband pink noise designed to equalize the detectability of the components of the pulse train made the pulse train about a factor of four more detectable than any of its components. However, in contrast-discrimination experiments, with a pedestal or masking grating of the same form and phase as the signal and 15% contrast, the noise did not affect the discrimination performance of the pulse train relative to that obtained with its sinusoidal components. We discuss the implications of these observations for models of early vision in particular the implications for possible sources of internal noise.

PDF [BibTex]

Choosing Multiple Parameters for Support Vector Machines

Chapelle, O., Vapnik, V., Bousquet, O., Mukherjee, S.

Machine Learning, 46(1):131-159, 2002 (article)

Abstract
The problem of automatically tuning multiple parameters for pattern recognition Support Vector Machines (SVMs) is considered. This is done by minimizing some estimates of the generalization error of SVMs using a gradient descent algorithm over the set of parameters. Usual methods for choosing parameters, based on exhaustive search, become intractable as soon as the number of parameters exceeds two. Some experimental results assess the feasibility of our approach for a large number of parameters (more than 100) and demonstrate an improvement of generalization performance.
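
As an illustration of the overall loop (gradient descent over kernel and regularization parameters driven by an error estimate), here is a hedged sketch in which a finite-difference gradient of a cross-validation error stands in for the paper's analytic gradients of its generalization-error estimates; the scikit-learn calls are real, the rest is illustrative:

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def tune_svm(X, y, log_theta0=(0.0, 0.0), lr=0.5, steps=30, eps=1e-2):
        """Gradient descent over (log C, log gamma) on a cross-validation
        error estimate; finite differences replace analytic gradients."""
        theta = np.array(log_theta0, dtype=float)

        def err(t):
            C, gamma = np.exp(t)
            return 1.0 - cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()

        for _ in range(steps):
            grad = np.array([(err(theta + eps * np.eye(2)[i]) - err(theta)) / eps
                             for i in range(2)])
            theta -= lr * grad
        return np.exp(theta)   # (C, gamma)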

PDF PostScript [BibTex]

2001


Anabolic and Catabolic Gene Expression Pattern Analysis in Normal Versus Osteoarthritic Cartilage Using Complementary DNA-Array Technology

Aigner, T., Zien, A., Gehrsitz, A., Gebhard, P., McKenna, L.

Arthritis and Rheumatism, 44(12):2777-2789, December 2001 (article)

Web [BibTex]

Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators

Williamson, R., Smola, A., Schölkopf, B.

IEEE Transactions on Information Theory, 47(6):2516-2532, September 2001 (article)

Abstract
We derive new bounds for the generalization error of kernel machines, such as support vector machines and related regularization networks, by obtaining new bounds on their covering numbers. The proofs make use of a viewpoint that is apparently novel in the field of statistical learning theory. The hypothesis class is described in terms of a linear operator mapping from a possibly infinite-dimensional unit ball in feature space into a finite-dimensional space. The covering numbers of the class are then determined via the entropy numbers of the operator. These numbers, which characterize the degree of compactness of the operator, can be bounded in terms of the eigenvalues of an integral operator induced by the kernel function used by the machine. As a consequence, we are able to theoretically explain the effect of the choice of kernel function on the generalization performance of support vector machines.

DOI [BibTex]

Centralization: A new method for the normalization of gene expression data

Zien, A., Aigner, T., Zimmer, R., Lengauer, T.

Bioinformatics, 17, pages: S323-S331, June 2001, Mathematical supplement available at http://citeseer.ist.psu.edu/574280.html (article)

Abstract
Microarrays measure values that are approximately proportional to the numbers of copies of different mRNA molecules in samples. Due to technical difficulties, the constant of proportionality between the measured intensities and the numbers of mRNA copies per cell is unknown and may vary for different arrays. Usually, the data are normalized (i.e., array-wise multiplied by appropriate factors) in order to compensate for this effect and to enable informative comparisons between different experiments. Centralization is a new two-step method for the computation of such normalization factors that is both biologically better motivated and more robust than standard approaches. First, for each pair of arrays the quotient of the constants of proportionality is estimated. Second, from the resulting matrix of pairwise quotients an optimally consistent scaling of the samples is computed.
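
A compact numpy sketch of the two-step idea, assuming a matrix of strictly positive raw intensities with one row per array; the pairwise quotient estimate below is a simple median of per-gene log-ratios standing in for the paper's more careful estimator:

    import numpy as np

    def centralize(X):
        """Centralization-style normalization sketch.

        X : (arrays, genes) matrix of raw intensities (all positive).
        Step 1: estimate pairwise scaling quotients between arrays
                (here: median of per-gene log-ratios).
        Step 2: find one scale factor per array that is maximally consistent
                with all pairwise estimates (least squares in log space).
        Returns the normalized matrix.
        """
        n = X.shape[0]
        logq = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                logq[i, j] = np.median(np.log(X[i]) - np.log(X[j]))
        # Consistent per-array log-factors f with f_i - f_j ~ logq[i, j];
        # the least-squares solution (up to an additive constant) is the row mean.
        f = logq.mean(axis=1)
        factors = np.exp(-f)               # multiply array i by factors[i]
        return X * factors[:, None]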

PDF PostScript Web [BibTex]

Regularized principal manifolds

Smola, A., Mika, S., Schölkopf, B., Williamson, R.

Journal of Machine Learning Research, 1, pages: 179-209, June 2001 (article)

Abstract
Many settings of unsupervised learning can be viewed as quantization problems - the minimization of the expected quantization error subject to some restrictions. This allows the use of tools such as regularization from the theory of (supervised) risk minimization for unsupervised learning. This setting turns out to be closely related to principal curves, the generative topographic map, and robust coding. We explore this connection in two ways: (1) we propose an algorithm for finding principal manifolds that can be regularized in a variety of ways; and (2) we derive uniform convergence bounds and hence bounds on the learning rates of the algorithm. In particular, we give bounds on the covering numbers, which allow us to obtain nearly optimal learning rates for certain types of regularization operators. Experimental results demonstrate the feasibility of the approach.

PDF [BibTex]

Failure Diagnosis of Discrete Event Systems

Son, HI., Kim, KW., Lee, S.

Journal of Control, Automation and Systems Engineering, 7(5):375-383, May 2001, In Korean (article)

[BibTex]

Pattern Selection Using the Bias and Variance of Ensemble

Shin, H., Cho, S.

Journal of the Korean Institute of Industrial Engineers, 28(1):112-127, March 2001 (article)

Abstract
A useful pattern is a pattern that contributes much to learning. For a classification problem those patterns near the class boundary surfaces carry more information to the classifier. For a regression problem the ones near the estimated surface carry more information. In both cases, the usefulness is defined only for those patterns either without error or with negligible error. Using only the useful patterns gives several benefits. First, computational complexity in memory and time for learning is decreased. Second, overfitting is avoided even when the learner is over-sized. Third, learning results in more stable learners. In this paper, we propose a pattern “utility index” that measures the utility of an individual pattern. The utility index is based on the bias and variance of a pattern trained by a network ensemble. In classification, the pattern with a low bias and a high variance gets a high score. In regression, on the other hand, the one with a low bias and a low variance gets a high score. Based on the distribution of the utility index, the original training set is divided into a high-score group and a low-score group. Only the high-score group is then used for training. The proposed method is tested on synthetic and real-world benchmark datasets. The proposed approach gives a better or at least similar performance.
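
A rough sketch of the selection loop for the classification case, using a bagged-tree ensemble as a stand-in for the paper's network ensemble (names and the exact scoring rule are illustrative):

    import numpy as np
    from sklearn.ensemble import BaggingClassifier
    from sklearn.tree import DecisionTreeClassifier

    def select_patterns(X, y, keep=0.5, n_members=25, random_state=0):
        """Keep the training patterns an ensemble finds most informative:
        mostly classified correctly (low bias) but with disagreement among
        ensemble members (high variance, i.e. near the class boundary).
        X, y are numpy arrays."""
        ens = BaggingClassifier(DecisionTreeClassifier(max_depth=3),
                                n_estimators=n_members,
                                random_state=random_state).fit(X, y)
        preds = np.stack([m.predict(X) for m in ens.estimators_])  # (members, n)
        p_correct = (preds == y).mean(axis=0)       # per-pattern accuracy
        bias = 1.0 - p_correct
        variance = p_correct * (1.0 - p_correct)    # member disagreement
        score = (1.0 - bias) * variance             # low bias, high variance
        idx = np.argsort(score)[::-1][: int(keep * len(y))]
        return X[idx], y[idx]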

[BibTex]

Structure and Functionality of a Designed p53 Dimer.

Davison, TS., Nie, X., Ma, W., Lin, Y., Kay, C., Benchimol, S., Arrowsmith, C.

Journal of Molecular Biology, 307(2):605-617, March 2001 (article)

Abstract
P53 is a homotetrameric tumor suppressor protein involved in transcriptional control of genes that regulate cell proliferation and death. In order to probe the role that oligomerization plays in this capacity, we have previously designed and characterized a series of p53 proteins with altered oligomeric states through hydrophilic substitution of residues Met340 or Leu344 in the normally tetrameric oligomerization domain. Although such mutations have little effect on the overall secondary structural content of the oligomerization domain, both solubility and the resistance to thermal denaturation are substantially reduced relative to that of the wild-type domain. Here, we report the design and characterization of a double-mutant p53 with alterations of residues at positions Met340 and Leu344. The double-mutations Met340Glu/Leu344Lys and Met340Gln/Leu344Arg resulted in distinct dimeric forms of the protein. Furthermore, we have verified by NMR structure determination that the double-mutant Met340Gln/Leu344Arg is essentially a "half-tetramer". Analysis of the in vivo activities of full-length p53 oligomeric mutants reveals that while cell-cycle arrest requires tetrameric p53, transcriptional transactivation activity of monomers and dimers retain roughly background and half of the wild-type activity, respectively.

Web [BibTex]

An Introduction to Kernel-Based Learning Algorithms

Müller, K., Mika, S., Rätsch, G., Tsuda, K., Schölkopf, B.

IEEE Transactions on Neural Networks, 12(2):181-201, March 2001 (article)

Abstract
This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel based learning in supervised and unsupervised scenarios including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis

DOI [BibTex]

Estimating the support of a high-dimensional distribution.

Schölkopf, B., Platt, J., Shawe-Taylor, J., Smola, A., Williamson, R.

Neural Computation, 13(7):1443-1471, March 2001 (article)

Abstract
Suppose you are given some data set drawn from an underlying probability distribution P and you want to estimate a “simple” subset S of input space such that the probability that a test point drawn from P lies outside of S equals some a priori specified value between 0 and 1. We propose a method to approach this problem by trying to estimate a function f that is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. The expansion coefficients are found by solving a quadratic programming problem, which we do by carrying out sequential optimization over pairs of input patterns. We also provide a theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabeled data.
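
The algorithm is available in standard libraries; a minimal usage sketch with scikit-learn's OneClassSVM (the data and parameter values are illustrative):

    import numpy as np
    from sklearn.svm import OneClassSVM

    # Estimate a region containing roughly 90% of the data (nu bounds the
    # fraction of training points treated as outliers).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 2))
    clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(X)

    test = np.array([[0.0, 0.0], [4.0, 4.0]])
    print(clf.predict(test))            # +1 inside the estimated support, -1 outside
    print(clf.decision_function(test))  # signed distance to the boundary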

Web DOI [BibTex]

The psychometric function: II. Bootstrap-based confidence intervals and sampling

Wichmann, F., Hill, N.

Perception and Psychophysics, 63 (8), pages: 1314-1329, 2001 (article)

PDF [BibTex]

The psychometric function: I. Fitting, sampling and goodness-of-fit

Wichmann, F., Hill, N.

Perception and Psychophysics, 63 (8), pages: 1293-1313, 2001 (article)

Abstract
The psychometric function relates an observer's performance to an independent variable, usually some physical quantity of a stimulus in a psychophysical task. This paper, together with its companion paper (Wichmann & Hill, 2001), describes an integrated approach to (1) fitting psychometric functions, (2) assessing the goodness of fit, and (3) providing confidence intervals for the function's parameters and other estimates derived from them, for the purposes of hypothesis testing. The present paper deals with the first two topics, describing a constrained maximum-likelihood method of parameter estimation and developing several goodness-of-fit tests. Using Monte Carlo simulations, we deal with two specific difficulties that arise when fitting functions to psychophysical data. First, we note that human observers are prone to stimulus-independent errors (or lapses). We show that failure to account for this can lead to serious biases in estimates of the psychometric function's parameters and illustrate how the problem may be overcome. Second, we note that psychophysical data sets are usually rather small by the standards required by most of the commonly applied statistical tests. We demonstrate the potential errors of applying traditional X^2 methods to psychophysical data and advocate use of Monte Carlo resampling techniques that do not rely on asymptotic theory. We have made available the software to implement our methods.
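
A minimal sketch of such a constrained maximum-likelihood fit, using a Weibull shape, a fixed 2AFC guessing rate, and a lapse rate bounded above; the parameterization and bounds are illustrative rather than the exact settings of the accompanying software:

    import numpy as np
    from scipy.optimize import minimize

    def fit_psychometric(x, k, n, gamma=0.5):
        """Constrained ML fit of a Weibull psychometric function with lapses:
        psi(x) = gamma + (1 - gamma - lam) * (1 - exp(-(x / a) ** b)).

        x : stimulus levels, k : number correct, n : trials per level."""
        x, k, n = map(np.asarray, (x, k, n))

        def nll(p):
            a, b, lam = p
            psi = gamma + (1.0 - gamma - lam) * (1.0 - np.exp(-(x / a) ** b))
            psi = np.clip(psi, 1e-9, 1.0 - 1e-9)
            return -(k * np.log(psi) + (n - k) * np.log(1.0 - psi)).sum()

        res = minimize(nll, x0=[np.median(x), 2.0, 0.01],
                       bounds=[(1e-6, None), (0.1, 20.0), (0.0, 0.06)])
        return res.x   # threshold-like a, slope-like b, lapse rate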

PDF [BibTex]

The control structure of artificial creatures

Zhou, D., Dai, R.

Artificial Life and Robotics, 5(3), 2001, invited article (article)

Web [BibTex]

Markovian domain fingerprinting: statistical segmentation of protein sequences

Bejerano, G., Seldin, Y., Margalit, H., Tishby, N.

Bioinformatics, 17(10):927-934, 2001 (article)

PDF Web [BibTex]

1999


Lernen mit Kernen: Support-Vektor-Methoden zur Analyse hochdimensionaler Daten

Schölkopf, B., Müller, K., Smola, A.

Informatik - Forschung und Entwicklung, 14(3):154-163, September 1999 (article)

Abstract
We describe recent developments and results of statistical learning theory. In the framework of learning from examples, two factors control generalization ability: how well the training data are explained, and the complexity of the learning machine used to explain them. We describe kernel algorithms in feature spaces as elegant and efficient methods of realizing such machines. Examples thereof are Support Vector Machines (SVM) and Kernel PCA (Principal Component Analysis). More important than any individual example of a kernel algorithm, however, is the insight that any algorithm that can be cast in terms of dot products can be generalized to a nonlinear setting using kernels. Finally, we illustrate the significance of kernel algorithms by briefly describing industrial and academic applications, including ones where we obtained benchmark record results.

PDF PDF DOI [BibTex]

Input space versus feature space in kernel-based methods

Schölkopf, B., Mika, S., Burges, C., Knirsch, P., Müller, K., Rätsch, G., Smola, A.

IEEE Transactions On Neural Networks, 10(5):1000-1017, September 1999 (article)

Abstract
This paper collects some ideas targeted at advancing our understanding of the feature spaces associated with support vector (SV) kernel functions. We first discuss the geometry of feature space. In particular, we review what is known about the shape of the image of input space under the feature space map, and how this influences the capacity of SV methods. Following this, we describe how the metric governing the intrinsic geometry of the mapped surface can be computed in terms of the kernel, using the example of the class of inhomogeneous polynomial kernels, which are often used in SV pattern recognition. We then discuss the connection between feature space and input space by dealing with the question of how one can, given some vector in feature space, find a preimage (exact or approximate) in input space. We describe algorithms to tackle this issue, and show their utility in two applications of kernel methods. First, we use it to reduce the computational complexity of SV decision functions; second, we combine it with the kernel PCA algorithm, thereby constructing a nonlinear statistical denoising technique which is shown to perform well on real-world data.
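
For Gaussian kernels, the pre-image step reduces to a simple fixed-point iteration; a numpy sketch (variable names are illustrative, and convergence can depend on the starting point):

    import numpy as np

    def rbf_preimage(X, gamma_coef, sigma, n_iter=100):
        """Approximate pre-image of Psi = sum_i gamma_coef[i] * phi(x_i) for a
        Gaussian kernel, via the fixed-point iteration
            z <- sum_i gamma_i k(z, x_i) x_i / sum_i gamma_i k(z, x_i).

        X : (n, d) training points, gamma_coef : (n,) expansion coefficients."""
        z = X[np.argmax(gamma_coef)].copy()          # start from a heavy point
        for _ in range(n_iter):
            w = gamma_coef * np.exp(-np.sum((X - z) ** 2, axis=1) / (2 * sigma ** 2))
            denom = w.sum()
            if abs(denom) < 1e-12:
                break
            z = (w[:, None] * X).sum(axis=0) / denom
        return z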

Web DOI [BibTex]

p73 and p63 are homotetramers capable of weak heterotypic interactions with each other but not with p53.

Davison, T., Vagner, C., Kaghad, M., Ayed, A., Caput, D., CH, ..

Journal of Biological Chemistry, 274(26):18709-18714, June 1999 (article)

Abstract
Mutations in the p53 tumor suppressor gene are the most frequent genetic alterations found in human cancers. Recent identification of two human homologues of p53 has raised the prospect of functional interactions between family members via a conserved oligomerization domain. Here we report in vitro and in vivo analysis of homo- and hetero-oligomerization of p53 and its homologues, p63 and p73. The oligomerization domains of p63 and p73 can independently fold into stable homotetramers, as previously observed for p53. However, the oligomerization domain of p53 does not associate with that of either p73 or p63, even when p53 is in 15-fold excess. On the other hand, the oligomerization domains of p63 and p73 are able to weakly associate with one another in vitro. In vivo co-transfection assays of the ability of p53 and its homologues to activate reporter genes showed that a DNA-binding mutant of p53 was not able to act in a dominant negative manner over wild-type p73 or p63 but that a p73 mutant could inhibit the activity of wild-type p63. These data suggest that mutant p53 in cancer cells will not interact with endogenous or exogenous p63 or p73 via their respective oligomerization domains. It also establishes that the multiple isoforms of p63 as well as those of p73 are capable of interacting via their common oligomerization domain.

Web [BibTex]

Spatial Learning and Localization in Animals: A Computational Model and Its Implications for Mobile Robots

Balakrishnan, K., Bousquet, O., Honavar, V.

Adaptive Behavior, 7(2):173-216, 1999 (article)

[BibTex]


SVMs for Histogram Based Image Classification

Chapelle, O., Haffner, P., Vapnik, V.

IEEE Transactions on Neural Networks, (9), 1999 (article)

Abstract
Traditional classification approaches generalize poorly on image classification tasks, because of the high dimensionality of the feature space. This paper shows that Support Vector Machines (SVM) can generalize well on difficult image classification problems where the only features are high dimensional histograms. Heavy-tailed RBF kernels of the form $K(\mathbf{x},\mathbf{y})=e^{-\rho\sum_i |x_i^a-y_i^a|^{b}}$ with $a\leq 1$ and $b\leq 2$ are evaluated on the classification of images extracted from the Corel Stock Photo Collection and shown to far outperform traditional polynomial or Gaussian RBF kernels. Moreover, we observed that a simple remapping of the input $x_i \rightarrow x_i^a$ improves the performance of linear SVMs to such an extent that it makes them, for this problem, a valid alternative to RBF kernels.
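
Both variants are easy to reproduce with a standard SVM implementation; a sketch assuming non-negative histogram features (parameter values are illustrative):

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import FunctionTransformer
    from sklearn.pipeline import make_pipeline

    # Heavy-tailed kernel K(x, y) = exp(-rho * sum_i |x_i^a - y_i^a|^b);
    # the broadcasted (n, m, d) array is fine for small problems only.
    def heavy_tailed_kernel(X, Y, a=0.25, b=1.0, rho=1.0):
        D = np.abs(X[:, None, :] ** a - Y[None, :, :] ** a) ** b
        return np.exp(-rho * D.sum(axis=-1))

    # Option 1: the full heavy-tailed kernel
    svm_kernel = SVC(kernel=heavy_tailed_kernel)

    # Option 2: the simple remapping x_i -> x_i^a followed by a linear SVM
    svm_remap = make_pipeline(FunctionTransformer(lambda X: X ** 0.25),
                              SVC(kernel="linear"))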

GZIP [BibTex]

1998


Book Review: An Introduction to Fuzzy Logic for Practical Applications

Peters, J.

K{\"u}nstliche Intelligenz (KI), 98(4):60-60, November 1998 (article)

[BibTex]

Where did I take that snapshot? Scene-based homing by image matching

Franz, M., Schölkopf, B., Bülthoff, H.

Biological Cybernetics, 79(3):191-202, October 1998 (article)

Abstract
In homing tasks, the goal is often not marked by visible objects but must be inferred from the spatial relation to the visual cues in the surrounding scene. The exact computation of the goal direction would require knowledge about the distances to visible landmarks, information which is not directly available to passive vision systems. However, if prior assumptions about typical distance distributions are used, a snapshot taken at the goal suffices to compute the goal direction from the current view. We show that most existing approaches to scene-based homing implicitly assume an isotropic landmark distribution. As an alternative, we propose a homing scheme that uses parameterized displacement fields. These are obtained from an approximation that incorporates prior knowledge about perspective distortions of the visual environment. A mathematical analysis proves that both approximations do not prevent the schemes from approaching the goal with arbitrary accuracy, but lead to different errors in the computed goal direction. Mobile robot experiments are used to test the theoretical predictions and to demonstrate the practical feasibility of the new approach.

PDF PDF DOI [BibTex]

On a Kernel-Based Method for Pattern Recognition, Regression, Approximation, and Operator Inversion

Smola, A., Schölkopf, B.

Algorithmica, 22(1-2):211-231, September 1998 (article)

Abstract
We present a kernel-based framework for pattern recognition, regression estimation, function approximation, and multiple operator inversion. Adopting a regularization-theoretic framework, the above are formulated as constrained optimization problems. Previous approaches such as ridge regression, support vector methods, and regularization networks are included as special cases. We show connections between the cost function and some properties up to now believed to apply to support vector machines only. For appropriately chosen cost functions, the optimal solution of all the problems described above can be found by solving a simple quadratic programming problem.

PDF DOI [BibTex]


The moon tilt illusion

Schölkopf, B.

Perception, 27(10):1229-1232, August 1998 (article)

Abstract
Besides the familiar moon illusion [e.g., Hershenson, 1989, The Moon Illusion (Hillsdale, NJ: Lawrence Erlbaum Associates)], wherein the moon appears bigger when it is close to the horizon, there is a less known illusion which causes the moon's illuminated side to appear turned away from the direction of the sun. An experiment documenting the effect is described, and a possible explanation is put forward.

Web DOI [BibTex]

Characterization of the oligomerization defects of two p53 mutants found in families with Li-Fraumeni and Li-Fraumeni-like syndrome.

Davison, T., Yin, P., Nie, E., Kay, C., Arrowsmith, CH.

Oncogene, 17(5):651-656, August 1998 (article)

Abstract
Recently two germline mutations in the oligomerization domain of p53 have been identified in patients with Li-Fraumeni and Li-Fraumeni-like Syndromes. We have used biophysical and biochemical methods to characterize these two mutants in order to better understand their functional defects and the role of the p53 oligomerization domain (residues 325-355) in oncogenesis. We find that residues 310-360 of the L344P mutant are monomeric, apparently unfolded and cannot interact with wild-type (WT) p53. The full length L344P protein is unable to bind sequence specifically to DNA and is therefore an inactive, but not a dominant negative mutant. R337C, on the other hand, can form dimers and tetramers, can hetero-oligomerize with WTp53 and can bind to a p53 consensus element. However, the thermal stability of R337C is much lower than that of WTp53 and at physiological temperatures more than half of this mutant is less than tetrameric. Thus, the R337C mutant retains some functional activity yet leads to a predisposition to cancer, suggesting that even partial inactivation of p53 oligomerization is sufficient for accelerated tumour progression.

Web [BibTex]


Nonlinear Component Analysis as a Kernel Eigenvalue Problem

Schölkopf, B., Smola, A., Müller, K.

Neural Computation, 10(5):1299-1319, July 1998 (article)

Abstract
A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map—for instance, the space of all possible five-pixel products in 16 × 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
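
A self-contained numpy sketch of the computation (Gaussian kernel, centering in feature space, eigendecomposition; parameter values are illustrative):

    import numpy as np

    def kernel_pca(X, n_components=2, gamma=1.0):
        """Kernel PCA with a Gaussian kernel: eigendecomposition of the centered
        kernel matrix; returns the projections of X onto the leading components."""
        # Gaussian (RBF) kernel matrix
        sq = np.sum(X ** 2, axis=1)
        K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
        # Center the kernel matrix in feature space
        n = K.shape[0]
        one = np.full((n, n), 1.0 / n)
        Kc = K - one @ K - K @ one + one @ K @ one
        # Leading eigenvectors, rescaled so the feature-space eigenvectors
        # have unit norm (divide by the square root of the eigenvalue)
        vals, vecs = np.linalg.eigh(Kc)
        vals, vecs = vals[::-1], vecs[:, ::-1]
        alphas = vecs[:, :n_components] / np.sqrt(np.maximum(vals[:n_components], 1e-12))
        return Kc @ alphas          # projections of the training points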

Web DOI [BibTex]

SVMs — a practical consequence of learning theory

Schölkopf, B.

IEEE Intelligent Systems and their Applications, 13(4):18-21, July 1998 (article)

Abstract
My first exposure to Support Vector Machines came this spring when I heard Sue Dumais present impressive results on text categorization using this analysis technique. This issue's collection of essays should help familiarize our readers with this interesting new racehorse in the Machine Learning stable. Bernhard Scholkopf, in an introductory overview, points out that a particular advantage of SVMs over other learning algorithms is that it can be analyzed theoretically using concepts from computational learning theory, and at the same time can achieve good performance when applied to real problems. Examples of these real-world applications are provided by Sue Dumais, who describes the aforementioned text-categorization problem, yielding the best results to date on the Reuters collection, and Edgar Osuna, who presents strong results on application to face detection. Our fourth author, John Platt, gives us a practical guide and a new technique for implementing the algorithm efficiently.

PDF Web DOI [BibTex]

Support vector machines

Hearst, M., Dumais, S., Osman, E., Platt, J., Schölkopf, B.

IEEE Intelligent Systems and their Applications, 13(4):18-28, July 1998 (article)

Abstract
My first exposure to Support Vector Machines came this spring when I heard Sue Dumais present impressive results on text categorization using this analysis technique. This issue's collection of essays should help familiarize our readers with this interesting new racehorse in the Machine Learning stable. Bernhard Scholkopf, in an introductory overview, points out that a particular advantage of SVMs over other learning algorithms is that it can be analyzed theoretically using concepts from computational learning theory, and at the same time can achieve good performance when applied to real problems. Examples of these real-world applications are provided by Sue Dumais, who describes the aforementioned text-categorization problem, yielding the best results to date on the Reuters collection, and Edgar Osuna, who presents strong results on application to face detection. Our fourth author, John Platt, gives us a practical guide and a new technique for implementing the algorithm efficiently.

PDF Web DOI [BibTex]

The connection between regularization operators and support vector kernels.

Smola, A., Schölkopf, B., Müller, K.

Neural Networks, 11(4):637-649, June 1998 (article)

Abstract
In this paper a correspondence is derived between regularization operators used in regularization networks and support vector kernels. We prove that the Green's functions associated with regularization operators are suitable support vector kernels with equivalent regularization properties. Moreover, the paper provides an analysis of currently used support vector kernels in the view of regularization theory and corresponding operators associated with the classes of both polynomial kernels and translation invariant kernels. The latter are also analyzed on periodical domains. As a by-product we show that a large number of radial basis functions, namely conditionally positive definite functions, may be used as support vector kernels.
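
A sketch of the core correspondence (stated here in compact form, not as a verbatim quotation): a symmetric kernel $k$ is a suitable support vector kernel for the regularization operator $P$ when $k$ is a Green's function of $P^{*}P$, so that $\langle (Pk)(x,\cdot), (Pk)(y,\cdot)\rangle = k(x,y)$; for an expansion $f = \sum_i \alpha_i k(x_i,\cdot)$ the regularizer then satisfies $\|Pf\|^2 = \sum_{i,j} \alpha_i \alpha_j k(x_i,x_j)$, i.e. it coincides with the quantity minimized by the support vector machine.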

PDF DOI [BibTex]

Learning view graphs for robot navigation

Franz, M., Schölkopf, B., Mallot, H., Bülthoff, H.

Autonomous Robots, 5(1):111-125, March 1998 (article)

Abstract
We present a purely vision-based scheme for learning a topological representation of an open environment. The system represents selected places by local views of the surrounding scene, and finds traversable paths between them. The set of recorded views and their connections are combined into a graph model of the environment. To navigate between views connected in the graph, we employ a homing strategy inspired by findings of insect ethology. In robot experiments, we demonstrate that complex visual exploration and navigation tasks can thus be performed without using metric information.

PDF PDF DOI [BibTex]

No role for motion blur in either motion detection or motion based image segmentation

Wichmann, F., Henning, G.

Journal of the Optical Society of America A, 15 (2), pages: 297-306, 1998 (article)

Abstract
We determined the influence of high-spatial-frequency losses induced by motion on motion detection and on motion-based image segmentation. Motion detection and motion-based segmentation tasks were performed with either spectrally low-pass or spectrally broadband stimuli. Performance on these tasks was compared with a condition having no motion but in which form differences mimicked the perceptual loss of high spatial frequencies produced by motion. This allowed the relative salience of motion and motion-induced blur to be determined. Neither image segmentation nor motion detection was sensitive to the high-spatial-frequency content of the stimuli. Thus the change in perceptual form produced in moving stimuli is not normally used as a cue either for motion detection or for motion-based image segmentation in ordinary situations.

PDF [BibTex]

PET with 18fluorodeoxyglucose and hexamethylpropylene amine oxime SPECT in late whiplash syndrome

Bicik, I., Radanov, B., Schaefer, N., Dvorak, J., Blum, B., Weber, B., Burger, C., von Schulthess, G., Buck, A.

Neurology, 51, pages: 345-350, 1998 (article)

[BibTex]

Changes of cerebral blood flow during short-term exposure to normobaric hypoxia

Buck, A., Schirlo, C., Jasinsky, V., Weber, B., Burger, C., von Schulthess, G., Koller, E., Pavlicek, V.

J Cereb Blood Flow Metab, 18, pages: 906-910, 1998 (article)

[BibTex]

Support Vector methods in learning and feature extraction

Schölkopf, B., Smola, A., Müller, K., Burges, C., Vapnik, V.

Ninth Australian Conference on Neural Networks, pages: 72-78, (Editors: T. Downs, M. Frean and M. Gallagher), 1998 (talk)

[BibTex]

Funktionelle Magnetresonanztomographie in der psychopathologischen Forschung.

Spitzer, M., Kammer, T., Bellemann, M., Brix, G., Layer, B., Maier, S., Kischka, U., Gückel, F.

Fortschritte der Neurologie Psychiatrie, 66, pages: 241-258, 1998 (article)

Abstract
Mental disorders are characterised by psychopathological symptoms which correspond to functional brain states. Functional magnetic resonance imaging (fMRI) is used for the non-invasive study of cerebral activation patterns in man. First of all, the neurobiological principles and presuppositions of the method are outlined. Results from the Heidelberg imaging lab on several simple sensorimotor tasks as well as higher cognitive functions, such as working and semantic memory, are then presented. Thereafter, results from preliminary fMRI studies of psychopathological symptoms are discussed, with emphasis on hallucinations, psychomotoric phenomena, emotions, as well as obsessions and compulsions. Functional MRI is limited by the physics underlying the method, as well as by practical constraints regarding its use in conjunction with mentally ill patients. Within this framework, the problems of signal-to-noise ratio, data analysis strategies, motion correction, and neurovascular coupling are considered. Because of the rapid development of the field of fMRI, maps of higher cognitive functions and their respective pathology seem to be coming within easy reach.

[BibTex]
