

2002


Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond

Schölkopf, B., Smola, A.

pages: 644, Adaptive Computation and Machine Learning, MIT Press, Cambridge, MA, USA, December 2002, Parts of this book, including an introduction to kernel methods, can be downloaded here. (book)

Abstract
In the 1990s, a new type of learning algorithm was developed, based on results from statistical learning theory: the Support Vector Machine (SVM). This gave rise to a new class of theoretically elegant learning machines that use a central concept of SVMs, kernels, for a number of learning tasks. Kernel machines provide a modular framework that can be adapted to different tasks and domains by the choice of the kernel function and the base algorithm. They are replacing neural networks in a variety of fields, including engineering, information retrieval, and bioinformatics. Learning with Kernels provides an introduction to SVMs and related kernel methods. Although the book begins with the basics, it also includes the latest research. It provides all of the concepts necessary to enable a reader equipped with some basic mathematical knowledge to enter the world of machine learning using theoretically well-founded yet easy-to-use kernel algorithms and to understand and apply the powerful algorithms that have been developed over the last few years.

Web [BibTex]

Surface-slant-from-texture discrimination: Effects of slant level and texture type

Rosas, P., Wichmann, F., Wagemans, J.

Journal of Vision, 2(7):300, Second Annual Meeting of the Vision Sciences Society (VSS), November 2002 (poster)

Abstract
The problem of surface-slant-from-texture was studied psychophysically by measuring the performances of five human subjects in a slant-discrimination task with a number of different types of textures: uniform lattices, randomly displaced lattices, polka dots, Voronoi tessellations, orthogonal sinusoidal plaid patterns, fractal or 1/f noise, “coherent” noise and a “diffusion-based” texture (leopard skin-like). The results show: (1) Improving performance with larger slants for all textures. (2) A “non-symmetrical” performance around a particular slant characterized by a psychometric function that is steeper in the direction of the more slanted orientation. (3) For sufficiently large slants (66 deg) there are no major differences in performance between any of the different textures. (4) For slants at 26, 37 and 53 degrees, however, there are marked differences between the different textures. (5) The observed differences in performance across textures for slants up to 53 degrees are systematic within subjects, and nearly so across them. This allows a rank-order of textures to be formed according to their “helpfulness” — that is, how easy the discrimination task is when a particular texture is mapped on the surface. Polka dots tended to allow the best slant discrimination performance, noise patterns the worst up to the large slant of 66 degrees at which performance was almost independent of the particular texture chosen. Finally, our large number of 2AFC trials (approximately 2800 trials per texture across subjects) and associated tight confidence intervals may enable us to find out about which statistical properties of the textures could be responsible for surface-slant-from-texture estimation, with the ultimate goal of being able to predict observer performance for any arbitrary texture.

Web DOI [BibTex]

Modelling Contrast Transfer in Spatial Vision

Wichmann, F.

Journal of Vision, 2(10):7, Second Annual Meeting of the Vision Sciences Society (VSS), November 2002 (poster)

Abstract
Much of our information about spatial vision comes from detection experiments involving low-contrast stimuli. Contrast discrimination experiments provide one way to explore the visual system's response to stimuli of higher contrast, the results of which allow different models of contrast processing (e.g. energy versus gain-control models) to be critically assessed (Wichmann & Henning, 1999). Studies of detection and discrimination using pulse-train stimuli in noise, on the other hand, make predictions about the number, position and properties of noise sources within the processing stream (Henning, Bird & Wichmann, 2002). Here I report modelling results combining data from both sinusoidal and pulse-train experiments, with and without noise, to arrive at a more tightly constrained model of early spatial vision.

Web DOI [BibTex]

Gender Classification of Human Faces

Graf, A., Wichmann, F.

In Biologically Motivated Computer Vision, pages: 1-18, (Editors: Bülthoff, H. H., S.W. Lee, T. A. Poggio and C. Wallraven), Springer, Berlin, Germany, Second International Workshop on Biologically Motivated Computer Vision (BMCV), November 2002 (inproceedings)

Abstract
This paper addresses the issue of combining pre-processing methods—dimensionality reduction using Principal Component Analysis (PCA) and Locally Linear Embedding (LLE)—with Support Vector Machine (SVM) classification for a behaviorally important task in humans: gender classification. A processed version of the MPI head database is used as the stimulus set. First, summary statistics of the head database are studied. Subsequently, the optimal parameters for LLE and the SVM are sought heuristically. These values are then used to compare the original face database with its processed counterpart and to assess the behavior of an SVM with respect to changes in illumination and perspective of the face images. Overall, PCA was superior in classification performance and allowed linear separability.
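
The PCA-plus-SVM pipeline described above can be illustrated with a minimal sketch. This is not the authors' code: it uses scikit-learn, and random arrays stand in for the vectorized face images and gender labels of the MPI head database.

```python
# Minimal sketch of PCA-based dimensionality reduction followed by SVM classification
# (illustrative only; random data stands in for the MPI head database).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4096))      # placeholder for 64x64 grayscale face images, flattened
y = rng.integers(0, 2, size=200)      # placeholder gender labels

clf = make_pipeline(PCA(n_components=50), SVC(kernel="linear", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())   # chance-level on random data
```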

PDF PDF DOI [BibTex]

Insect-Inspired Estimation of Self-Motion

Franz, MO., Chahl, JS.

In Biologically Motivated Computer Vision, (2525):171-180, LNCS, (Editors: Bülthoff, H.H. , S.W. Lee, T.A. Poggio, C. Wallraven), Springer, Berlin, Germany, Second International Workshop on Biologically Motivated Computer Vision (BMCV), November 2002 (inproceedings)

Abstract
The tangential neurons in the fly brain are sensitive to the typical optic flow patterns generated during self-motion. In this study, we examine whether a simplified linear model of these neurons can be used to estimate self-motion from the optic flow. We present a theory for the construction of an optimal linear estimator incorporating prior knowledge about the environment. The optimal estimator is tested on a gantry carrying an omnidirectional vision sensor. The experiments show that the proposed approach leads to accurate and robust estimates of rotation rates, whereas translation estimates turn out to be less reliable.
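
A minimal numerical sketch of the idea of a linear self-motion estimator follows. It is my simplification, not the authors' model: the flow-generating geometry is random, and prior knowledge enters only as a ridge regularizer.

```python
# Minimal sketch: learn a linear map W from optic-flow vectors to self-motion
# parameters by regularized least squares on simulated data (illustrative sizes).
import numpy as np

rng = np.random.default_rng(1)
n_flow, n_motion, n_samples = 500, 6, 2000

M = rng.normal(size=(n_samples, n_motion))                 # self-motions (3 rotations, 3 translations)
A = rng.normal(size=(n_flow, n_motion))                    # stand-in for the flow-generating geometry
F = M @ A.T + 0.1 * rng.normal(size=(n_samples, n_flow))   # noisy optic-flow measurements

lam = 1e-2                                                  # ridge term (crude stand-in for prior knowledge)
W = np.linalg.solve(F.T @ F + lam * np.eye(n_flow), F.T @ M)
print(np.mean((F @ W - M) ** 2))                            # residual error of the linear estimate
```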

PDF PDF DOI [BibTex]

Pulse train detection and discrimination in pink noise

Henning, G., Wichmann, F., Bird, C.

Journal of Vision, 2(7):229, Second Annual Meeting of the Vision Sciences Society (VSS), November 2002 (poster)

Abstract
Much of our information about spatial vision comes from detection experiments involving low-contrast stimuli. Contrast discrimination experiments provide one way to explore the visual system's response to stimuli of higher contrast. We explored both detection and contrast discrimination performance with sinusoidal and "pulse-train" (or line) gratings. Both types of grating had a fundamental spatial frequency of 2.09-c/deg but the pulse-train, ideally, contains, in addition to its fundamental component, all the harmonics of the fundamental. Although the 2.09-c/deg pulse-train produced on the display was measured and shown to contain at least 8 harmonics at equal contrast, it was no more detectable than its most detectable component; no benefit from having additional information at the harmonics was measurable. The addition of broadband "pink" noise, designed to equalize the detectability of the components of the pulse train, made it about a factor of four more detectable than any of its components. However, in contrast-discrimination experiments, with an in-phase pedestal or masking grating of the same form and phase as the signal and 15% contrast, the noise did not improve the discrimination performance of the pulse train relative to that of its sinusoidal components. In contrast, a 2.09-c/deg "super train," constructed to have 8 equally detectable harmonics, was a factor of five more detectable than any of its components. We discuss the implications of these observations for models of early vision, in particular the implications for possible sources of internal noise.
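
For readers unfamiliar with the stimulus, here is a short sketch of how a pulse-train grating can be built as a sum of harmonics at equal contrast. The 2.09 c/deg fundamental and the 8 components come from the abstract; the sampling and contrast value are my assumptions.

```python
# Minimal sketch: a "pulse-train" grating as the sum of a 2.09-c/deg fundamental
# and its first 7 harmonics, all at equal contrast (sampling and contrast illustrative).
import numpy as np

deg = np.linspace(0.0, 2.0, 512)    # 2 degrees of visual angle
f0 = 2.09                           # fundamental spatial frequency in cycles/deg
contrast = 0.05                     # equal contrast for every component

sinusoid = contrast * np.cos(2 * np.pi * f0 * deg)
pulse_train = sum(contrast * np.cos(2 * np.pi * k * f0 * deg) for k in range(1, 9))
print(sinusoid.max(), pulse_train.max())
```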

Web DOI [BibTex]

Combining Sensory Information to Improve Visualization

Ernst, M., Banks, M., Wichmann, F., Maloney, L., Bülthoff, H.

In Proceedings of the Conference on Visualization ‘02 (VIS ‘02), pages: 571-574, (Editors: Moorhead, R. , M. Joy), IEEE, Piscataway, NJ, USA, IEEE Conference on Visualization (VIS '02), October 2002 (inproceedings)

Abstract
Seemingly effortlessly, the human brain reconstructs the three-dimensional environment surrounding us from the light pattern striking the eyes. This seems to be true across almost all viewing and lighting conditions. One important factor behind this apparent ease is the redundancy of information provided by the sensory organs. For example, perspective distortions, shading, motion parallax, or the disparity between the two eyes' images are all, at least partly, redundant signals which provide us with information about the three-dimensional layout of the visual scene. Our brain uses all these different sensory signals and combines the available information into a coherent percept. In displays visualizing data, however, the information is often highly reduced and abstracted, which may lead to an altered perception and therefore a misinterpretation of the visualized data. In this panel we will discuss mechanisms involved in the combination of sensory information and their implications for simulations using computer displays, as well as problems resulting from current display technology such as cathode-ray tubes.

PDF Web [BibTex]

Incorporating Invariances in Non-Linear Support Vector Machines

Chapelle, O., Schölkopf, B.

In Advances in Neural Information Processing Systems 14, pages: 609-616, (Editors: TG Dietterich and S Becker and Z Ghahramani), MIT Press, Cambridge, MA, USA, 15th Annual Neural Information Processing Systems Conference (NIPS), September 2002 (inproceedings)

Abstract
The choice of an SVM kernel corresponds to the choice of a representation of the data in a feature space and, to improve performance, it should therefore incorporate prior knowledge such as known transformation invariances. We propose a technique which extends earlier work and aims at incorporating invariances in nonlinear kernels. We show on a digit recognition task that the proposed approach is superior to the Virtual Support Vector method, which previously had been the method of choice.

PDF Web [BibTex]

Constructing Boosting algorithms from SVMs: an application to one-class classification.

Rätsch, G., Mika, S., Schölkopf, B., Müller, K.

IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(9):1184-1199, September 2002 (article)

Abstract
We show via an equivalence of mathematical programs that a support vector (SV) algorithm can be translated into an equivalent boosting-like algorithm and vice versa. We exemplify this translation procedure for a new algorithm—one-class leveraging—starting from the one-class support vector machine (1-SVM). This is a first step toward unsupervised learning in a boosting framework. Building on so-called barrier methods known from the theory of constrained optimization, it returns a function, written as a convex combination of base hypotheses, that characterizes whether a given test point is likely to have been generated from the distribution underlying the training data. Simulations on one-class classification problems demonstrate the usefulness of our approach.

DOI [BibTex]

Kernel Dependency Estimation

Weston, J., Chapelle, O., Elisseeff, A., Schölkopf, B., Vapnik, V.

(98), Max Planck Institute for Biological Cybernetics, August 2002 (techreport)

Abstract
We consider the learning problem of finding a dependency between a general class of objects and another, possibly different, general class of objects. The objects can be for example: vectors, images, strings, trees or graphs. Such a task is made possible by employing similarity measures in both input and output spaces using kernel functions, thus embedding the objects into vector spaces. Output kernels also make it possible to encode prior information and/or invariances in the loss function in an elegant way. We experimentally validate our approach on several tasks: mapping strings to strings, pattern recognition, and reconstruction from partial images.

PDF [BibTex]

Phase information in the recognition of natural images

Braun, D., Wichmann, F., Gegenfurtner, K.

Perception, 31(ECVP Abstract Supplement):133, 25th European Conference on Visual Perception, August 2002 (poster)

Abstract
Fourier phase plays an important role in determining global image structure. For example, when the phase spectrum of an image of a flower is swapped with that of a tank, we usually perceive a tank, even though the amplitude spectrum is still that of the flower. Similarly, when the phase spectrum of an image is randomly swapped across frequencies, that is, its Fourier energy is randomly distributed over the image, the resulting image becomes impossible to recognise. Our goal was to evaluate the effect of phase manipulations in a quantitative manner. Subjects viewed two images of natural scenes, one of which contained an animal (the target) embedded in the background. The spectra of the images were manipulated by adding random phase noise at each frequency. The phase noise was the independent variable, uniformly distributed between 0° and ±180°. Subjects were remarkably resistant to phase noise. Even with ±120° noise, subjects were still 75% correct. The proportion of correct answers closely followed the correlation between original and noise-distorted images. Thus it appears as if it is not the global phase information per se that determines our percept of natural images, but rather the effect of phase on local image features.
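
The phase-noise manipulation can be sketched in a few lines. This is my reconstruction under stated assumptions (grayscale image as a NumPy array, noise level given in degrees); taking the real part after the inverse FFT is a simplification of however the authors enforced a real-valued result.

```python
# Minimal sketch of adding uniform random phase noise to an image's Fourier spectrum.
import numpy as np

def add_phase_noise(image, max_deg, seed=0):
    rng = np.random.default_rng(seed)
    spectrum = np.fft.fft2(image)
    noise = np.deg2rad(rng.uniform(-max_deg, max_deg, size=spectrum.shape))
    distorted = np.abs(spectrum) * np.exp(1j * (np.angle(spectrum) + noise))
    return np.real(np.fft.ifft2(distorted))   # real part taken as a simplification

img = np.random.default_rng(1).normal(size=(256, 256))    # stand-in for a natural scene
noisy = add_phase_noise(img, max_deg=120)                  # +/-120 degrees of phase noise
print(np.corrcoef(img.ravel(), noisy.ravel())[0, 1])       # correlation with the original
```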

Web [BibTex]

The contributions of color to recognition memory for natural scenes

Wichmann, F., Sharpe, L., Gegenfurtner, K.

Journal of Experimental Psychology: Learning, Memory and Cognition, 28(3):509-520, May 2002 (article)

Abstract
The authors used a recognition memory paradigm to assess the influence of color information on visual memory for images of natural scenes. Subjects performed 5-10% better for colored than for black-and-white images independent of exposure duration. Experiment 2 indicated little influence of contrast once the images were suprathreshold, and Experiment 3 revealed that performance worsened when images were presented in color and tested in black and white, or vice versa, leading to the conclusion that the surface property color is part of the memory representation. Experiments 4 and 5 exclude the possibility that the superior recognition memory for colored images results solely from attentional factors or saliency. Finally, the recognition memory advantage disappears for falsely colored images of natural scenes: The improvement in recognition memory depends on the color congruence of presented images with learned knowledge about the color gamut found within natural scenes. The results can be accounted for within a multiple memory systems framework.

PDF Web DOI [BibTex]

Detection and discrimination in pink noise

Wichmann, F., Henning, G.

5, pages: 100, 5. Tübinger Wahrnehmungskonferenz (TWK), February 2002 (poster)

Abstract
Much of our information about early spatial vision comes from detection experiments involving low-contrast stimuli, which are not, perhaps, particularly "natural" stimuli. Contrast discrimination experiments provide one way to explore the visual system's response to stimuli of higher contrast whilst keeping the number of unknown parameters comparatively small. We explored both detection and contrast discrimination performance with sinusoidal and "pulse-train" (or line) gratings. Both types of grating had a fundamental spatial frequency of 2.09-c/deg but the pulse-train, ideally, contains, in addition to its fundamental component, all the harmonics of the fundamental. Although the 2.09-c/deg pulse-train produced on our display was measured using a high-performance digital camera (Photometrics) and shown to contain at least 8 harmonics at equal contrast, it was no more detectable than its most detectable component; no benefit from having additional information at the harmonics was measurable. The addition of broadband 1-D "pink" noise made it about a factor of four more detectable than any of its components. However, in contrast-discrimination experiments, with an in-phase pedestal or masking grating of the same form and phase as the signal and 15% contrast, the noise did not improve the discrimination performance of the pulse train relative to that of its sinusoidal components. We discuss the implications of these observations for models of early vision, in particular the implications for possible sources of internal noise.

Web [BibTex]

Training invariant support vector machines

DeCoste, D., Schölkopf, B.

Machine Learning, 46(1-3):161-190, January 2002 (article)

Abstract
Practical experience has shown that in order to obtain the best possible performance, prior knowledge about invariances of a classification problem at hand ought to be incorporated into the training procedure. We describe and review all known methods for doing so in support vector machines, provide experimental results, and discuss their respective merits. One of the significant new results reported in this work is our recent achievement of the lowest reported test error on the well-known MNIST digit recognition benchmark task, with SVM training times that are also significantly faster than previous SVM methods.
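
One of the standard invariance techniques in this line of work is the virtual support vector method: train an SVM, add transformed copies of its support vectors, and retrain. The sketch below is a hedged illustration, not the paper's MNIST pipeline; it uses scikit-learn's small digits set and 1-pixel shifts as the invariance transformation.

```python
# Minimal sketch of the virtual support vector idea: train an SVM, generate shifted
# copies of its support vectors, and retrain on the enlarged set (illustrative data).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
images = X.reshape(-1, 8, 8)

svm = SVC(kernel="rbf", gamma=0.001, C=10).fit(X, y)
sv_img, sv_y = images[svm.support_], y[svm.support_]

# virtual SVs: each support vector shifted one pixel left and one pixel right
virtual = np.concatenate([np.roll(sv_img, s, axis=2) for s in (-1, 1)])
virtual_y = np.concatenate([sv_y, sv_y])

X_aug = np.concatenate([X, virtual.reshape(len(virtual), -1)])
y_aug = np.concatenate([y, virtual_y])
svm_vsv = SVC(kernel="rbf", gamma=0.001, C=10).fit(X_aug, y_aug)
print(len(svm.support_), len(svm_vsv.support_))   # number of SVs before and after
```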

PDF DOI [BibTex]

Contrast discrimination with sinusoidal gratings of different spatial frequency

Bird, C., Henning, G., Wichmann, F.

Journal of the Optical Society of America A, 19(7), pages: 1267-1273, 2002 (article)

Abstract
The detectability of contrast increments was measured as a function of the contrast of a masking or “pedestal” grating at a number of different spatial frequencies ranging from 2 to 16 cycles per degree of visual angle. The pedestal grating always had the same orientation, spatial frequency and phase as the signal. The shape of the contrast increment threshold versus pedestal contrast (TvC) functions depends on the performance level used to define the “threshold,” but when both axes are normalized by the contrast corresponding to 75% correct detection at each frequency, the TvC functions at a given performance level are identical. Confidence intervals on the slope of the rising part of the TvC functions are so wide that it is not possible with our data to reject Weber’s Law.

PDF [BibTex]

A compression approach to support vector model selection

von Luxburg, U., Bousquet, O., Schölkopf, B.

(101), Max Planck Institute for Biological Cybernetics, 2002, see more detailed JMLR version (techreport)

Abstract
In this paper we investigate connections between statistical learning theory and data compression on the basis of support vector machine (SVM) model selection. Inspired by several generalization bounds we construct "compression coefficients" for SVMs, which measure the amount by which the training labels can be compressed by some classification hypothesis. The main idea is to relate the coding precision of this hypothesis to the width of the margin of the SVM. The compression coefficients connect well-known quantities such as the radius-margin ratio R^2/rho^2, the eigenvalues of the kernel matrix and the number of support vectors. To test whether they are useful in practice we ran model selection experiments on several real-world datasets. As a result we found that compression coefficients can fairly accurately predict the parameters for which the test error is minimized.

[BibTex]

Application of Monte Carlo Methods to Psychometric Function Fitting

Wichmann, F.

Proceedings of the 33rd European Conference on Mathematical Psychology, pages: 44, 2002 (poster)

Abstract
The psychometric function relates an observer's performance to an independent variable, usually some physical quantity of a stimulus in a psychophysical task. Here I describe methods for (1) fitting psychometric functions, (2) assessing goodness-of-fit, and (3) providing confidence intervals for the function's parameters and other estimates derived from them. First I describe a constrained maximum-likelihood method for parameter estimation. Using Monte-Carlo simulations I demonstrate that it is important to have a fitting method that takes stimulus-independent errors (or "lapses") into account. Second, a number of goodness-of-fit tests are introduced. Because psychophysical data sets are usually rather small I advocate the use of Monte Carlo resampling techniques that do not rely on asymptotic theory for goodness-of-fit assessment. Third, a parametric bootstrap is employed to estimate the variability of fitted parameters and derived quantities such as thresholds and slopes. I describe how the bootstrap bridging assumption, on which the validity of the procedure depends, can be tested without incurring too high a cost in computation time. Finally I describe how the methods can be extended to test hypotheses concerning the form and shape of several psychometric functions. Software implementing the methods is available (http://www.bootstrap-software.com/psignifit/), as well as articles describing the methods in detail (Wichmann & Hill, Perception & Psychophysics, 2001a, b).
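
As a rough illustration of the first and third steps (constrained maximum-likelihood fitting with a lapse parameter, and a parametric bootstrap for confidence intervals), here is a hedged sketch. The Weibull form, the 2AFC guessing rate of 0.5, the data values and the lapse bound are my assumptions, not taken from the poster.

```python
# Minimal sketch: constrained ML fit of a 2AFC Weibull psychometric function with a
# lapse rate, plus a parametric bootstrap for a confidence interval on the threshold.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

def pf(x, alpha, beta, lam):
    """2AFC Weibull psychometric function with lapse rate lam."""
    return 0.5 + (0.5 - lam) * (1.0 - np.exp(-(x / alpha) ** beta))

def neg_log_lik(params, x, k, n):
    p = np.clip(pf(x, *params), 1e-6, 1 - 1e-6)
    return -np.sum(binom.logpmf(k, n, p))

x = np.array([0.02, 0.04, 0.08, 0.16, 0.32])    # stimulus contrasts (made up)
n = np.full(5, 50)                               # trials per level
k = np.array([27, 33, 41, 47, 50])               # correct responses (made up)

bounds = [(1e-3, 1.0), (0.5, 10.0), (0.0, 0.06)]         # lapse rate constrained to [0, 0.06]
fit = minimize(neg_log_lik, x0=[0.08, 3.0, 0.01], args=(x, k, n), bounds=bounds)
alpha, beta, lam = fit.x

rng = np.random.default_rng(0)
thresholds = []                                  # parametric bootstrap: simulate, refit, collect
for _ in range(200):
    k_sim = rng.binomial(n, pf(x, alpha, beta, lam))
    refit = minimize(neg_log_lik, x0=fit.x, args=(x, k_sim, n), bounds=bounds)
    thresholds.append(refit.x[0])
print(alpha, np.percentile(thresholds, [2.5, 97.5]))
```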

[BibTex]

Contrast discrimination with pulse-trains in pink noise

Henning, G., Bird, C., Wichmann, F.

Journal of the Optical Society of America A, 19(7), pages: 1259-1266, 2002 (article)

Abstract
Detection performance was measured with sinusoidal and pulse-train gratings. Although the 2.09-c/deg pulse-train, or line gratings, contained at least 8 harmonics all at equal contrast, they were no more detectable than their most detectable component. The addition of broadband pink noise designed to equalize the detectability of the components of the pulse train made the pulse train about a factor of four more detectable than any of its components. However, in contrast-discrimination experiments, with a pedestal or masking grating of the same form and phase as the signal and 15% contrast, the noise did not affect the discrimination performance of the pulse train relative to that obtained with its sinusoidal components. We discuss the implications of these observations for models of early vision, in particular the implications for possible sources of internal noise.

PDF [BibTex]

A kernel approach for learning from almost orthogonal patterns

Schölkopf, B., Weston, J., Eskin, E., Leslie, C., Noble, W.

In Principles of Data Mining and Knowledge Discovery, Lecture Notes in Computer Science 2430/2431, pages: 511-528, (Editors: T Elomaa and H Mannila and H Toivonen), Springer, Berlin, Germany, 13th European Conference on Machine Learning (ECML) and 6th European Conference on Principles and Practice of Knowledge Discovery in Databases (PKDD'2002), 2002 (inproceedings)

PostScript DOI [BibTex]

Optimal linear estimation of self-motion - a real-world test of a model of fly tangential neurons

Franz, MO.

SAB 02 Workshop, Robotics as theoretical biology, 7th meeting of the International Society for Simulation of Adaptive Behaviour (SAB), (Editors: Prescott, T.; Webb, B.), 2002 (poster)

Abstract
The tangential neurons in the fly brain are sensitive to the typical optic flow patterns generated during self-motion (see example in Fig.1). We examine whether a simplified linear model of these neurons can be used to estimate self-motion from the optic flow. We present a theory for the construction of an optimal linear estimator incorporating prior knowledge both about the distance distribution of the environment, and about the noise and self-motion statistics of the sensor. The optimal estimator is tested on a gantry carrying an omnidirectional vision sensor that can be moved along three translational and one rotational degree of freedom. The experiments indicate that the proposed approach yields accurate results for rotation estimates, independently of the current translation and scene layout. Translation estimates, however, turned out to be sensitive to simultaneous rotation and to the particular distance distribution of the scene. The gantry experiments confirm that the receptive field organization of the tangential neurons allows them, as an ensemble, to extract self-motion from the optic flow.

PDF [BibTex]

Luminance Artifacts on CRT Displays

Wichmann, F.

In IEEE Visualization, pages: 571-574, (Editors: Moorhead, R.; Gross, M.; Joy, K. I.), IEEE Visualization, 2002 (inproceedings)

Abstract
Most visualization panels today are still built around cathode-ray tubes (CRTs), certainly on personal desktops at work and at home. Whilst capable of producing pleasing images for common applications ranging from email writing to TV and DVD presentation, it is as well to note that there are a number of nonlinear transformations between input (voltage) and output (luminance) which distort the digital and/or analogue images sent to a CRT. Some of them are input-independent and hence easy to fix, e.g. gamma correction, but others, such as pixel interactions, depend on the content of the input stimulus and are thus harder to compensate for. CRT-induced image distortions cause problems not only in basic vision research but also for applications where image fidelity is critical, most notably in medicine (digitization of X-ray images for diagnostic purposes) and in forms of online commerce, such as the online sale of images, where the image must be reproduced on some output device which will not have the same transfer function as the customer's CRT. I will present measurements from a number of CRTs and illustrate how some of their shortcomings may be problematic for the aforementioned applications.

[BibTex]

2001


Regularized principal manifolds

Smola, A., Mika, S., Schölkopf, B., Williamson, R.

Journal of Machine Learning Research, 1, pages: 179-209, June 2001 (article)

Abstract
Many settings of unsupervised learning can be viewed as quantization problems - the minimization of the expected quantization error subject to some restrictions. This allows the use of tools such as regularization from the theory of (supervised) risk minimization for unsupervised learning. This setting turns out to be closely related to principal curves, the generative topographic map, and robust coding. We explore this connection in two ways: (1) we propose an algorithm for finding principal manifolds that can be regularized in a variety of ways; and (2) we derive uniform convergence bounds and hence bounds on the learning rates of the algorithm. In particular, we give bounds on the covering numbers which allow us to obtain nearly optimal learning rates for certain types of regularization operators. Experimental results demonstrate the feasibility of the approach.

PDF [BibTex]

Variationsverfahren zur Untersuchung von Grundzustandseigenschaften des Ein-Band Hubbard-Modells

Eichhorn, J.

Biologische Kybernetik, Technische Universität Dresden, Dresden/Germany, May 2001 (diplomathesis)

Abstract
Using different modifications of a new variational approach, static ground-state properties of the one-band Hubbard model, such as the energy and the staggered magnetisation, are calculated. By taking into account additional fluctuations, the method is gradually improved so that a very good description of the energy in one and two dimensions can be achieved. After a detailed discussion of the application in one dimension, extensions to two dimensions are introduced. In particular, a modified version of the variational ansatz should make it possible to describe the quantum phase transition of the magnetisation.

PostScript [BibTex]

The psychometric function: II. Bootstrap-based confidence intervals and sampling

Wichmann, F., Hill, N.

Perception and Psychophysics, 63 (8), pages: 1314-1329, 2001 (article)

PDF [BibTex]

The psychometric function: I. Fitting, sampling and goodness-of-fit

Wichmann, F., Hill, N.

Perception and Psychophysics, 63 (8), pages: 1293-1313, 2001 (article)

Abstract
The psychometric function relates an observer's performance to an independent variable, usually some physical quantity of a stimulus in a psychophysical task. This paper, together with its companion paper (Wichmann & Hill, 2001), describes an integrated approach to (1) fitting psychometric functions, (2) assessing the goodness of fit, and (3) providing confidence intervals for the function's parameters and other estimates derived from them, for the purposes of hypothesis testing. The present paper deals with the first two topics, describing a constrained maximum-likelihood method of parameter estimation and developing several goodness-of-fit tests. Using Monte Carlo simulations, we deal with two specific difficulties that arise when fitting functions to psychophysical data. First, we note that human observers are prone to stimulus-independent errors (or "lapses"). We show that failure to account for this can lead to serious biases in estimates of the psychometric function's parameters and illustrate how the problem may be overcome. Second, we note that psychophysical data sets are usually rather small by the standards required by most of the commonly applied statistical tests. We demonstrate the potential errors of applying traditional χ² methods to psychophysical data and advocate use of Monte Carlo resampling techniques that do not rely on asymptotic theory. We have made available the software to implement our methods.

PDF [BibTex]

Unsupervised Segmentation and Classification of Mixtures of Markovian Sources

Seldin, Y., Bejerano, G., Tishby, N.

In The 33rd Symposium on the Interface of Computing Science and Statistics (Interface 2001 - Frontiers in Data Mining and Bioinformatics), pages: 1-15, 33rd Symposium on the Interface of Computing Science and Statistics (Interface - Frontiers in Data Mining and Bioinformatics), 2001 (inproceedings)

Abstract
We describe a novel algorithm for unsupervised segmentation of sequences into alternating Variable Memory Markov sources, first presented in [SBT01]. The algorithm is based on competitive learning between Markov models, when implemented as Prediction Suffix Trees [RST96] using the MDL principle. By applying a model clustering procedure, based on rate distortion theory combined with deterministic annealing, we obtain a hierarchical segmentation of sequences between alternating Markov sources. The method is applied successfully to unsupervised segmentation of multilingual texts into languages, where it is able to infer correctly both the number of languages and the language switching points. When applied to protein sequence families (results of the [BSMT01] work), we demonstrate the method's ability to identify biologically meaningful sub-sequences within the proteins, which correspond to signatures of important functional sub-units called domains. Our approach to protein classification (through the obtained signatures) is shown to have both conceptual and practical advantages over the currently used methods.

PDF Web [BibTex]

Markovian domain fingerprinting: statistical segmentation of protein sequences

Bejerano, G., Seldin, Y., Margalit, H., Tishby, N.

Bioinformatics, 17(10):927-934, 2001 (article)

PDF Web [BibTex]

Unsupervised Sequence Segmentation by a Mixture of Switching Variable Memory Markov Sources

Seldin, Y., Bejerano, G., Tishby, N.

In Proceedings of the 18th International Conference on Machine Learning (ICML 2001), pages: 513-520, 18th International Conference on Machine Learning (ICML), 2001 (inproceedings)

Abstract
We present a novel information theoretic algorithm for unsupervised segmentation of sequences into alternating Variable Memory Markov sources. The algorithm is based on competitive learning between Markov models, when implemented as Prediction Suffix Trees (Ron et al., 1996) using the MDL principle. By applying a model clustering procedure, based on rate distortion theory combined with deterministic annealing, we obtain a hierarchical segmentation of sequences between alternating Markov sources. The algorithm seems to be self-regulated and automatically avoids over-segmentation. The method is applied successfully to unsupervised segmentation of multilingual texts into languages, where it is able to infer correctly both the number of languages and the language switching points. When applied to protein sequence families, we demonstrate the method's ability to identify biologically meaningful sub-sequences within the proteins, which correspond to important functional sub-units called domains.
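
To make the competitive-learning idea concrete, here is a deliberately simplified sketch of my own: windows of a sequence are assigned to whichever of two fixed-order Markov models predicts them best, and the models are refit in alternation. The paper itself uses variable-memory models (Prediction Suffix Trees), the MDL principle and annealed rate-distortion clustering rather than this plain alternation.

```python
# Simplified sketch (my construction, not the paper's algorithm): competitive
# segmentation of a sequence between two first-order Markov models.
import numpy as np

def fit_markov(seq, alphabet=4, alpha=1.0):
    """Smoothed first-order transition matrix estimated from an integer sequence."""
    counts = np.full((alphabet, alphabet), alpha)
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_lik(seq, P):
    return sum(np.log(P[a, b]) for a, b in zip(seq[:-1], seq[1:]))

rng = np.random.default_rng(0)
seq = np.concatenate([rng.integers(0, 2, 400),    # source 1 uses symbols {0, 1}
                      rng.integers(2, 4, 400)])   # source 2 uses symbols {2, 3}
windows = [seq[i:i + 20] for i in range(0, len(seq), 20)]

models = [fit_markov(windows[0]), fit_markov(windows[-1])]   # seed models from two windows
for _ in range(10):
    labels = np.array([np.argmax([log_lik(w, P) for P in models]) for w in windows])
    for m in (0, 1):
        assigned = [w for w, l in zip(windows, labels) if l == m]
        if assigned:                              # keep the old model if a cluster empties
            models[m] = fit_markov(np.concatenate(assigned))
print(labels)                                     # windows should split at the source boundary
```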

PDF [BibTex]

Inference Principles and Model Selection

Buhmann, J., Schölkopf, B.

(01301), Dagstuhl Seminar, 2001 (techreport)

Web [BibTex]

1998


SVMs — a practical consequence of learning theory

Schölkopf, B.

IEEE Intelligent Systems and their Applications, 13(4):18-21, July 1998 (article)

Abstract
My first exposure to Support Vector Machines came this spring when I heard Sue Dumais present impressive results on text categorization using this analysis technique. This issue's collection of essays should help familiarize our readers with this interesting new racehorse in the Machine Learning stable. Bernhard Scholkopf, in an introductory overview, points out that a particular advantage of SVMs over other learning algorithms is that they can be analyzed theoretically using concepts from computational learning theory, and at the same time can achieve good performance when applied to real problems. Examples of these real-world applications are provided by Sue Dumais, who describes the aforementioned text-categorization problem, yielding the best results to date on the Reuters collection, and Edgar Osuna, who presents strong results on application to face detection. Our fourth author, John Platt, gives us a practical guide and a new technique for implementing the algorithm efficiently.

PDF Web DOI [BibTex]

Prior knowledge in support vector kernels

Schölkopf, B., Simard, P., Smola, A., Vapnik, V.

In Advances in Neural Information Processing Systems 10, pages: 640-646, (Editors: M Jordan and M Kearns and S Solla), MIT Press, Cambridge, MA, USA, Eleventh Annual Conference on Neural Information Processing (NIPS), June 1998 (inproceedings)

PDF Web [BibTex]

From regularization operators to support vector kernels

Smola, A., Schölkopf, B.

In Advances in Neural Information Processing Systems 10, pages: 343-349, (Editors: M Jordan and M Kearns and S Solla), MIT Press, Cambridge, MA, USA, 11th Annual Conference on Neural Information Processing (NIPS), June 1998 (inproceedings)

PDF Web [BibTex]

Learning view graphs for robot navigation

Franz, M., Schölkopf, B., Mallot, H., Bülthoff, H.

Autonomous Robots, 5(1):111-125, March 1998 (article)

Abstract
We present a purely vision-based scheme for learning a topological representation of an open environment. The system represents selected places by local views of the surrounding scene, and finds traversable paths between them. The set of recorded views and their connections are combined into a graph model of the environment. To navigate between views connected in the graph, we employ a homing strategy inspired by findings of insect ethology. In robot experiments, we demonstrate that complex visual exploration and navigation tasks can thus be performed without using metric information.
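
As an illustration of the kind of topological representation described above, here is a small hedged sketch of a view graph data structure: nodes store snapshot views, edges record view pairs between which homing succeeded, and routes are found by graph search. The class and method names are mine, not the authors'.

```python
# Minimal sketch of a topological view graph (illustrative, not the authors' system).
from collections import defaultdict, deque

class ViewGraph:
    def __init__(self):
        self.views = {}                    # node id -> stored snapshot view
        self.edges = defaultdict(set)      # node id -> ids reachable by visual homing

    def add_view(self, node_id, view):
        self.views[node_id] = view

    def connect(self, a, b):               # record a traversable homing link
        self.edges[a].add(b)
        self.edges[b].add(a)

    def route(self, start, goal):          # breadth-first search: fewest homing steps
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in self.edges[path[-1]]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

g = ViewGraph()
for i, v in enumerate(["view A", "view B", "view C"]):
    g.add_view(i, v)
g.connect(0, 1); g.connect(1, 2)
print(g.route(0, 2))                       # [0, 1, 2]
```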

PDF PDF DOI [BibTex]
