63 results (BibTeX)

2001


The psychometric function: II. Bootstrap-based confidence intervals and sampling

Wichmann, F., Hill, N.

Perception and Psychophysics, 63 (8), pages: 1314-1329, 2001 (article)

PDF [BibTex]



Variationsverfahren zur Untersuchung von Grundzustandseigenschaften des Ein-Band Hubbard-Modells

Eichhorn, J.

Biologische Kybernetik, Technische Universität Dresden, Dresden/Germany, May 2001 (diplomathesis)

Abstract
Using different modifications of a new variational approach, static ground-state properties of the one-band Hubbard model, such as energy and staggered magnetisation, are calculated. By taking additional fluctuations into account, the method is gradually improved so that a very good description of the energy in one and two dimensions can be achieved. After a detailed discussion of the application in one dimension, extensions to two dimensions are introduced. Using a modified version of the variational ansatz, it should in particular be possible to describe the quantum phase transition of the magnetisation.

PostScript [BibTex]


Nonstationary Signal Classification using Support Vector Machines

Gretton, A., Davy, M., Doucet, A., Rayner, P.

In 11th IEEE Workshop on Statistical Signal Processing, pages: 305-305, 11th IEEE Workshop on Statistical Signal Processing, 2001 (inproceedings)

Abstract
In this paper, we demonstrate the use of support vector (SV) techniques for the binary classification of nonstationary sinusoidal signals with quadratic phase. We briefly describe the theory underpinning SV classification, and introduce Cohen's group time-frequency representation, which is used to process the nonstationary signals so as to define the classifier input space. We show that the SV classifier outperforms alternative classification methods on this processed data.

PostScript [BibTex]


Enhanced User Authentication through Typing Biometrics with Artificial Neural Networks and K-Nearest Neighbor Algorithm

Wong, FWMH. Supian, ASM. Ismail, AF. Lai, WK. Ong, CS.

In 2001 (inproceedings)

[BibTex]


Predicting the Nonlinear Dynamics of Biological Neurons using Support Vector Machines with Different Kernels

Frontzek, T. Lal, TN. Eckmiller, R.

In Proceedings of the International Joint Conference on Neural Networks (IJCNN'2001) Washington DC, 2, pages: 1492-1497, Proceedings of the International Joint Conference on Neural Networks (IJCNN'2001) Washington DC, 2001 (inproceedings)

Abstract
Based on biological data we examine the ability of Support Vector Machines (SVMs) with Gaussian, polynomial, and tanh kernels to learn and predict the nonlinear dynamics of single biological neurons. We show that SVMs for regression learn the dynamics of the pyloric dilator neuron of the Australian crayfish, and we determine the optimal SVM parameters with regard to the test error. Compared to conventional RBF networks and MLPs, SVMs with Gaussian kernels learned faster and performed better iterated one-step-ahead prediction with regard to training and test error. From a biological point of view, SVMs are especially better at predicting the most important part of the dynamics, where the membrane potential is driven by superimposed synaptic inputs to the threshold for the oscillatory peak.
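The iterated one-step-ahead scheme evaluated in the paper can be sketched generically: a model maps a window of past samples to the next sample, and its own predictions are fed back in to roll the forecast forward. This is an illustrative reconstruction, not the authors' SVM pipeline; the toy linear recursion below stands in for the neuron data.

```python
# Sketch of iterated one-step-ahead prediction (hypothetical setup, not the
# authors' exact SVM-regression model): the predictor's own outputs are fed
# back into the input window at each step.

def iterate_prediction(model, seed, n_steps):
    """Roll a one-step-ahead predictor forward for n_steps.

    model: callable taking a tuple of past values, returning the next value.
    seed:  initial window of observed values (most recent last).
    """
    window = list(seed)
    out = []
    for _ in range(n_steps):
        nxt = model(tuple(window))
        out.append(nxt)
        window = window[1:] + [nxt]  # feed the prediction back in
    return out

# Toy "dynamics" whose update rule the model knows exactly, so the iterated
# prediction reproduces the sequence without drift.
model = lambda w: w[-1] + w[-2]
print(iterate_prediction(model, [1, 1], 5))  # [2, 3, 5, 8, 13]
```

With a learned regressor in place of `model`, the training and test errors of exactly this rollout are what the paper compares across kernels.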

PDF [BibTex]


Support vector novelty detection applied to jet engine vibration spectra

Hayton, P., Schölkopf, B., Tarassenko, L., Anuzis, P.

In Advances in Neural Information Processing Systems 13, pages: 946-952, (Editors: TK Leen and TG Dietterich and V Tresp), MIT Press, Cambridge, MA, USA, 14th Annual Neural Information Processing Systems Conference (NIPS), April 2001 (inproceedings)

Abstract
A system has been developed to extract diagnostic information from jet engine carcass vibration data. Support Vector Machines applied to novelty detection provide a measure of how unusual the shape of a vibration signature is, by learning a representation of normality. We describe a novel method of including information from a second class in Support Vector novelty detection, and give results from its application to jet engine vibration analysis.

PDF Web [BibTex]


Computationally Efficient Face Detection

Romdhani, S., Torr, P., Schölkopf, B., Blake, A.

In Computer Vision, ICCV 2001, vol. 2, (73):695-700, IEEE, 8th International Conference on Computer Vision, 2001 (inproceedings)

DOI [BibTex]


Pattern Selection Using the Bias and Variance of Ensemble

Shin, H., Cho, S.

In Proc. of the Korean Data Mining Conference, pages: 56-67, Korean Data Mining Conference, December 2001 (inproceedings)

[BibTex]


Four-legged Walking Gait Control Using a Neuromorphic Chip Interfaced to a Support Vector Learning Algorithm

Still, S., Schölkopf, B., Hepp, K., Douglas, R.

In Advances in Neural Information Processing Systems 13, pages: 741-747, (Editors: TK Leen and TG Dietterich and V Tresp), MIT Press, Cambridge, MA, USA, 14th Annual Neural Information Processing Systems Conference (NIPS), April 2001 (inproceedings)

Abstract
To control the walking gaits of a four-legged robot we present a novel neuromorphic VLSI chip that coordinates the relative phasing of the robot's legs similar to how spinal Central Pattern Generators are believed to control vertebrate locomotion [3]. The chip controls the leg movements by driving motors with time varying voltages which are the outputs of a small network of coupled oscillators. The characteristics of the chip's output voltages depend on a set of input parameters. The relationship between input parameters and output voltages can be computed analytically for an idealized system. In practice, however, this ideal relationship is only approximately true due to transistor mismatch and offsets.
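The idea of a small network of coupled oscillators with fixed relative phasing can be sketched in software. This is a hypothetical phase-oscillator model of the concept, not the VLSI circuit; the coupling gain `k`, step size `dt`, and the trot-like offsets are assumed illustrative values.

```python
import math

# Four coupled phase oscillators: each leg is pulled toward a fixed phase
# offset from a reference leg (leg 0), mimicking how the chip coordinates
# the relative phasing of the robot's legs.

def step_gait(phases, offsets, omega=2 * math.pi, k=5.0, dt=0.001):
    """One Euler step: leg i tracks the reference leg plus offsets[i]."""
    ref = phases[0]
    return [
        (p + dt * (omega if i == 0
                   else omega + k * math.sin(ref + offsets[i] - p))) % (2 * math.pi)
        for i, p in enumerate(phases)
    ]

# Trot-like gait: diagonal leg pairs in phase, the two pairs half a cycle apart.
offsets = [0.0, math.pi, math.pi, 0.0]
phases = [0.0, 0.1, 2.0, 5.0]
for _ in range(5000):
    phases = step_gait(phases, offsets)
# A time-varying motor drive per leg would then be e.g. sin(phase).
```

Whatever the initial phases, the legs lock into the prescribed offsets; on the chip, the analogous input parameters shape the output voltages that drive the motors.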

PDF Web [BibTex]


Design and Verification of Supervisory Controller of High-Speed Train

Yoo, SP. Lee, DY. Son, HI.

In IEEE International Symposium on Industrial Electronics, pages: 1290-1295, IEEE Operations Center, Piscataway, NJ, USA, IEEE International Symposium on Industrial Electronics (ISIE), 2001 (inproceedings)

Abstract
A high-level controller, the supervisory controller, is required to monitor, control, and diagnose the low-level controllers of the high-speed train. The supervisory controller controls the low-level controllers by monitoring input and output signals and events, and the high-speed train can be modeled as a discrete event system (DES). The high-speed train is modeled with automata, and the high-level control specification is defined. The supervisory controller is designed using the high-speed train model and the control specification, and is then verified and evaluated by simulation using a computer-aided software engineering (CASE) tool, Object GEODE.

Web DOI [BibTex]


Kernel Methods for Extracting Local Image Semantics

Bradshaw, B., Schölkopf, B., Platt, J.

(MSR-TR-2001-99), Microsoft Research, October 2001 (techreport)

Web [BibTex]


Plaid maskers revisited: asymmetric plaids

Wichmann, F.

pages: 57, 4. Tübinger Wahrnehmungskonferenz (TWK), March 2001 (poster)

Abstract
A large number of psychophysical and physiological experiments suggest that luminance patterns are independently analysed in channels responding to different bands of spatial frequency. There are, however, interactions among stimuli falling well outside the usual estimates of channels' bandwidths. Derrington & Henning (1989) first reported that, in 2-AFC sinusoidal-grating detection, plaid maskers, whose components are oriented symmetrically about the signal orientation, cause a substantially larger threshold elevation than would be predicted from their sinusoidal constituents alone. Wichmann & Tollin (1997a,b) and Wichmann & Henning (1998) confirmed and extended the original findings, measuring masking as a function of presentation time and plaid mask contrast. Here I investigate masking using plaid patterns whose components are asymmetrically positioned about the signal orientation. Standard temporal 2-AFC pattern discrimination experiments were conducted using plaid patterns and oblique sinusoidal gratings as maskers, and horizontally orientated sinusoidal gratings as signals. Signal and maskers were always interleaved on the display (refresh rate 152 Hz). As in the case of the symmetrical plaid maskers, substantial masking was observed for many of the asymmetrical plaids. Masking is neither a straightforward function of the plaid's constituent sinusoidal components nor of the periodicity of the luminance beats between components. These results cause problems for the notion that, even for simple stimuli, detection and discrimination are based on the outputs of channels tuned to limited ranges of spatial frequency and orientation, even if a limited set of nonlinear interactions between these channels is allowed.

Web [BibTex]


Cerebellar Control of Robot Arms

Peters, J.

Biologische Kybernetik, Technische Universität München, München, Germany, 2001 (diplomathesis)

[BibTex]


The psychometric function: I. Fitting, sampling and goodness-of-fit

Wichmann, F., Hill, N.

Perception and Psychophysics, 63 (8), pages: 1293-1313, 2001 (article)

Abstract
The psychometric function relates an observer's performance to an independent variable, usually some physical quantity of a stimulus in a psychophysical task. This paper, together with its companion paper (Wichmann & Hill, 2001), describes an integrated approach to (1) fitting psychometric functions, (2) assessing the goodness of fit, and (3) providing confidence intervals for the function's parameters and other estimates derived from them, for the purposes of hypothesis testing. The present paper deals with the first two topics, describing a constrained maximum-likelihood method of parameter estimation and developing several goodness-of-fit tests. Using Monte Carlo simulations, we deal with two specific difficulties that arise when fitting functions to psychophysical data. First, we note that human observers are prone to stimulus-independent errors (or lapses). We show that failure to account for this can lead to serious biases in estimates of the psychometric function's parameters, and illustrate how the problem may be overcome. Second, we note that psychophysical data sets are usually rather small by the standards required by most of the commonly applied statistical tests. We demonstrate the potential errors of applying traditional χ² methods to psychophysical data and advocate the use of Monte Carlo resampling techniques that do not rely on asymptotic theory. We have made available the software to implement our methods.
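The fitting target can be sketched in a few lines: a psychometric function with guess rate γ and lapse rate λ, plus its binomial negative log-likelihood. A logistic core is assumed here for brevity; the paper itself considers several shapes (e.g. the Weibull).

```python
import math

# Minimal sketch (assumed logistic core): performance is modeled as
#   psi(x) = gamma + (1 - gamma - lapse) * F(x; alpha, beta),
# where gamma is the guess rate (0.5 in 2AFC) and lapse accounts for
# stimulus-independent errors.

def psi(x, alpha, beta, gamma, lapse):
    core = 1.0 / (1.0 + math.exp(-(x - alpha) / beta))
    return gamma + (1.0 - gamma - lapse) * core

def neg_log_likelihood(params, data, gamma=0.5):
    """Binomial NLL; data is a list of (stimulus level x, n trials, k correct)."""
    alpha, beta, lapse = params
    nll = 0.0
    for x, n, k in data:
        p = psi(x, alpha, beta, gamma, lapse)
        nll -= k * math.log(p) + (n - k) * math.log(1.0 - p)
    return nll

print(psi(0.0, 0.0, 1.0, 0.5, 0.0))  # 0.75: midpoint of a lapse-free 2AFC curve
```

Constrained maximum-likelihood fitting then minimizes this NLL while keeping the lapse rate inside a small interval, which is what guards against the parameter bias the abstract describes.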

PDF [BibTex]


On Unsupervised Learning of Mixtures of Markov Sources

Seldin, Y.

Biologische Kybernetik, The Hebrew University of Jerusalem, Israel, 2001 (diplomathesis)

PDF [BibTex]


Algorithmic Stability and Generalization Performance

Bousquet, O., Elisseeff, A.

In Advances in Neural Information Processing Systems 13, pages: 196-202, (Editors: Leen, T.K. , T.G. Dietterich, V. Tresp), MIT Press, Cambridge, MA, USA, Fourteenth Annual Neural Information Processing Systems Conference (NIPS), April 2001 (inproceedings)

Abstract
We present a novel way of obtaining PAC-style bounds on the generalization error of learning algorithms, explicitly using their stability properties. A stable learner is one for which the learned solution does not change much under small changes of the training set. The bounds we obtain do not depend on any measure of the complexity of the hypothesis space (e.g. VC dimension) but rather depend on how the learning algorithm searches this space, and can thus be applied even when the VC dimension is infinite. We demonstrate that regularization networks possess the required stability property and apply our method to obtain new bounds on their generalization performance.

PDF Web [BibTex]


Towards Learning Path Planning for Solving Complex Robot Tasks

Frontzek, T. Lal, TN. Eckmiller, R.

In Proceedings of the International Conference on Artificial Neural Networks (ICANN'2001) Vienna, pages: 943-950, Proceedings of the International Conference on Artificial Neural Networks (ICANN'2001) Vienna, 2001 (inproceedings)

Abstract
For solving complex robot tasks it is necessary to incorporate path planning methods that are able to operate within different high-dimensional configuration spaces containing an unknown number of obstacles. Based on the Advanced A* algorithm (AA*), which uses expansion matrices instead of a simple expansion logic, we propose a further improvement of AA* that enables it to learn directly from sample planning tasks. This is done by inserting weights into the expansion matrix, which are modified according to a special learning rule. For an exemplary planning task we show that Adaptive AA* learns movement vectors which allow larger movements than the initial ones into well-defined directions of the configuration space. Compared to standard approaches, planning times are clearly reduced.

PDF [BibTex]


Learning to predict the leave-one-out error of kernel based classifiers

Tsuda, K., Rätsch, G., Mika, S., Müller, K.

In International Conference on Artificial Neural Networks, ICANN'01, (LNCS 2130):331-338, (Editors: G. Dorffner, H. Bischof and K. Hornik), International Conference on Artificial Neural Networks, ICANN'01, 2001 (inproceedings)

PDF [BibTex]


The Kernel Trick for Distances

Schölkopf, B.

In Advances in Neural Information Processing Systems 13, pages: 301-307, (Editors: TK Leen and TG Dietterich and V Tresp), MIT Press, Cambridge, MA, USA, 14th Annual Neural Information Processing Systems Conference (NIPS), April 2001 (inproceedings)

Abstract
A method is described which, like the kernel trick in support vector machines (SVMs), lets us generalize distance-based algorithms to operate in feature spaces, usually nonlinearly related to the input space. This is done by identifying a class of kernels which can be represented as norm-based distances in Hilbert spaces. It turns out that common kernel algorithms, such as SVMs and kernel PCA, are in fact distance-based algorithms and can be run with that class of kernels, too. As well as providing a useful new insight into how these algorithms work, the present work can form the basis for conceiving new algorithms.
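The identity underlying the paper is easy to state in code: for a suitable kernel k, the feature-space distance is available directly from kernel evaluations as d(x, y)² = k(x, x) − 2 k(x, y) + k(y, y). A minimal sketch with an RBF kernel:

```python
import math

# Feature-space distance from kernel evaluations:
#   d(x, y)^2 = k(x, x) - 2 k(x, y) + k(y, y).
# Sketched with an RBF kernel, for which k(x, x) = 1.

def rbf(x, y, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def kernel_dist(x, y, k=rbf):
    # max(0, .) guards against tiny negative values from rounding
    return math.sqrt(max(0.0, k(x, x) - 2.0 * k(x, y) + k(y, y)))

print(kernel_dist((0.0,), (0.0,)))  # 0.0
```

Any distance-based algorithm (nearest neighbour, k-medoids, ...) can then be run in feature space by swapping `kernel_dist` in for the Euclidean metric.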

PDF Web [BibTex]


A kernel approach for vector quantization with guaranteed distortion bounds

Tipping, M., Schölkopf, B.

In Artificial Intelligence and Statistics, pages: 129-134, (Editors: T Jaakkola and T Richardson), Morgan Kaufmann, San Francisco, CA, USA, 8th International Conference on Artificial Intelligence and Statistics (AI and STATISTICS), 2001 (inproceedings)

[BibTex]


Separation of post-nonlinear mixtures using ACE and temporal decorrelation

Ziehe, A., Kawanabe, M., Harmeling, S., Müller, K.

In ICA 2001, pages: 433-438, (Editors: Lee, T.-W. , T.P. Jung, S. Makeig, T. J. Sejnowski), Third International Workshop on Independent Component Analysis and Blind Signal Separation, December 2001 (inproceedings)

Abstract
We propose an efficient method based on the concept of maximal correlation that reduces the post-nonlinear blind source separation problem (PNL BSS) to a linear BSS problem. For this we apply the Alternating Conditional Expectation (ACE) algorithm – a powerful technique from nonparametric statistics – to approximately invert the (post-)nonlinear functions. Interestingly, in the framework of the ACE method convergence can be proven and in the PNL BSS scenario the optimal transformation found by ACE will coincide with the desired inverse functions. After the nonlinearities have been removed by ACE, temporal decorrelation (TD) allows us to recover the source signals. An excellent performance underlines the validity of our approach and demonstrates the ACE-TD method on realistic examples.

PDF [BibTex]


Incorporating Invariances in Non-Linear Support Vector Machines

Chapelle, O., Schölkopf, B.

Max Planck Institute for Biological Cybernetics / Biowulf Technologies, 2001 (techreport)

Abstract
We consider the problem of how to incorporate into the Support Vector Machine (SVM) framework invariances given by some a priori known transformations under which the data should be invariant. This extends previous work that was applicable only to linear SVMs, and we show on a digit recognition task that the proposed approach is superior to the traditional Virtual Support Vector method.

PostScript [BibTex]


Perception of Planar Shapes in Depth

Wichmann, F., Willems, B., Rosas, P., Wagemans, J.

Journal of Vision, 1(3):176, First Annual Meeting of the Vision Sciences Society (VSS), December 2001 (poster)

Abstract
We investigated the influence of the perceived 3D-orientation of planar elliptical shapes on the perception of the shapes themselves. Ellipses were projected onto the surface of a sphere and subjects were asked to indicate if the projected shapes looked as if they were a circle on the surface of the sphere. The image of the sphere was obtained from a real, (near) perfect sphere using a highly accurate digital camera (real sphere diameter 40 cm; camera-to-sphere distance 320 cm; for details see Willems et al., Perception 29, S96, 2000; Photometrics SenSys 400 digital camera with Rodenstock lens, 12-bit linear luminance resolution). Stimuli were presented monocularly on a carefully linearized Sony GDM-F500 monitor keeping the scene geometry as in the real case (sphere diameter on screen 8.2 cm; viewing distance 66 cm). Experiments were run in a darkened room using a viewing tube to minimize, as far as possible, extraneous monocular cues to depth. Three different methods were used to obtain subjects' estimates of 3D-shape: the method of adjustment, temporal 2-alternative forced choice (2AFC) and yes/no. Several results are noteworthy. First, mismatch between perceived and objective slant tended to decrease with increasing objective slant. Second, the variability of the settings, too, decreased with increasing objective slant. Finally, we comment on the results obtained using different psychophysical methods and compare our results to those obtained using a real sphere and binocular vision (Willems et al.).

Web DOI [BibTex]


Unsupervised Segmentation and Classification of Mixtures of Markovian Sources

Seldin, Y., Bejerano, G., Tishby, N.

In The 33rd Symposium on the Interface of Computing Science and Statistics (Interface 2001 - Frontiers in Data Mining and Bioinformatics), pages: 1-15, 33rd Symposium on the Interface of Computing Science and Statistics (Interface - Frontiers in Data Mining and Bioinformatics), 2001 (inproceedings)

Abstract
We describe a novel algorithm for unsupervised segmentation of sequences into alternating Variable Memory Markov sources, first presented in [SBT01]. The algorithm is based on competitive learning between Markov models, when implemented as Prediction Suffix Trees [RST96] using the MDL principle. By applying a model clustering procedure, based on rate distortion theory combined with deterministic annealing, we obtain a hierarchical segmentation of sequences between alternating Markov sources. The method is applied successfully to unsupervised segmentation of multilingual texts into languages, where it is able to infer correctly both the number of languages and the language switching points. When applied to protein sequence families (results of the [BSMT01] work), we demonstrate the method's ability to identify biologically meaningful sub-sequences within the proteins, which correspond to signatures of important functional sub-units called domains. Our approach to protein classification (through the obtained signatures) is shown to have both conceptual and practical advantages over the currently used methods.

PDF Web [BibTex]


Vicinal Risk Minimization

Chapelle, O., Weston, J., Bottou, L., Vapnik, V.

In Advances in Neural Information Processing Systems 13, pages: 416-422, (Editors: Leen, T.K. , T.G. Dietterich, V. Tresp), MIT Press, Cambridge, MA, USA, Fourteenth Annual Neural Information Processing Systems Conference (NIPS) , April 2001 (inproceedings)

Abstract
The Vicinal Risk Minimization principle establishes a bridge between generative models and methods derived from the Structural Risk Minimization Principle such as Support Vector Machines or Statistical Regularization. We explain how VRM provides a framework which integrates a number of existing algorithms, such as Parzen windows, Support Vector Machines, Ridge Regression, Constrained Logistic Classifiers and Tangent-Prop. We then show how the approach implies new algorithms for solving problems usually associated with generative models. New algorithms are described for dealing with pattern recognition problems with very different pattern distributions and dealing with unlabeled data. Preliminary empirical results are presented.
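In the simplest Parzen-window instantiation of VRM, each training point is replaced by samples drawn from a small Gaussian vicinity around it, and a standard learner is trained on those virtual samples. A minimal sketch, with σ and the per-point sample count as assumed illustrative values:

```python
import random

# Parzen-window vicinal sampling (assumed illustrative sigma and sample
# count): draw virtual inputs from a Gaussian vicinity of each training
# point, keeping the original label.

def vicinal_sample(data, sigma=0.1, n_virtual=10, seed=0):
    """data: list of (x, y) with scalar inputs; returns virtual (x', y) pairs."""
    rng = random.Random(seed)
    return [(x + rng.gauss(0.0, sigma), y)
            for x, y in data
            for _ in range(n_virtual)]

virtual = vicinal_sample([(0.0, -1), (1.0, +1)])
print(len(virtual))  # 20
```

Minimizing the empirical risk on `virtual` then approximates minimizing the vicinal risk rather than the plain empirical risk, which is how the framework recovers Parzen windows as a special case.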

PDF Web [BibTex]


Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators

Williamson, R., Smola, A., Schölkopf, B.

IEEE Transactions on Information Theory, 47(6):2516-2532, September 2001 (article)

Abstract
We derive new bounds for the generalization error of kernel machines, such as support vector machines and related regularization networks, by obtaining new bounds on their covering numbers. The proofs make use of a viewpoint that is apparently novel in the field of statistical learning theory. The hypothesis class is described in terms of a linear operator mapping from a possibly infinite-dimensional unit ball in feature space into a finite-dimensional space. The covering numbers of the class are then determined via the entropy numbers of the operator. These numbers, which characterize the degree of compactness of the operator, can be bounded in terms of the eigenvalues of an integral operator induced by the kernel function used by the machine. As a consequence, we are able to theoretically explain the effect of the choice of kernel function on the generalization performance of support vector machines.

DOI [BibTex]


Tracking a Small Set of Experts by Mixing Past Posteriors

Bousquet, O., Warmuth, M.

In Proceedings of the 14th Annual Conference on Computational Learning Theory, Lecture Notes in Computer Science, 2111, pages: 31-47, Proceedings of the 14th Annual Conference on Computational Learning Theory, Lecture Notes in Computer Science, 2001 (inproceedings)

Abstract
In this paper, we examine on-line learning problems in which the target concept is allowed to change over time. In each trial a master algorithm receives predictions from a large set of $n$ experts. Its goal is to predict almost as well as the best sequence of such experts chosen off-line by partitioning the training sequence into $k+1$ sections and then choosing the best expert for each section. We build on methods developed by Herbster and Warmuth and consider an open problem posed by Freund where the experts in the best partition are from a small pool of size $m$. Since $k \gg m$, the best expert shifts back and forth between the experts of the small pool. We propose algorithms that solve this open problem by mixing the past posteriors maintained by the master algorithm. We relate the number of bits needed for encoding the best partition to the loss bounds of the algorithms. Instead of paying $\log n$ for choosing the best expert in each section we first pay $\log {n\choose m}$ bits in the bounds for identifying the pool of $m$ experts and then $\log m$ bits per new section. In the bounds we also pay twice for encoding the boundaries of the sections.
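One update of this kind — a standard exponential-weights loss update followed by mixing the new posterior with the uniform average of past posteriors — can be sketched as follows. Here η and α are assumed illustrative values, and this is only one of the mixing schemes the paper analyzes:

```python
import math

# One trial of a "mixing past posteriors" master algorithm (sketch).

def mpp_step(weights, losses, past, eta=1.0, alpha=0.1):
    """weights: current posterior over experts; losses: per-expert losses
    this trial; past: list of previous posteriors (appended to in place)."""
    # 1. Loss update: normalized exponential weights.
    v = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    z = sum(v)
    v = [x / z for x in v]
    past.append(v)
    # 2. Mixing step: share some weight back toward previously good experts,
    #    so an expert from the small pool can be recovered quickly.
    avg = [sum(p[i] for p in past) / len(past) for i in range(len(v))]
    return [(1 - alpha) * vi + alpha * ai for vi, ai in zip(v, avg)]

past = []
w = mpp_step([0.5, 0.5], [0.0, 1.0], past)
print(round(sum(w), 6))  # 1.0
```

Because the mixing step keeps a fraction of the mass on past posteriors, an expert that was good in an earlier section retains enough weight to be re-identified with only about log m extra bits, as in the bounds above.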

PDF PostScript [BibTex]


Learning and Prediction of the Nonlinear Dynamics of Biological Neurons with Support Vector Machines

Frontzek, T. Lal, TN. Eckmiller, R.

In Proceedings of the International Conference on Artificial Neural Networks (ICANN'2001), pages: 390-398, Proceedings of the International Conference on Artificial Neural Networks (ICANN'2001), 2001 (inproceedings)

Abstract
Based on biological data we examine the ability of Support Vector Machines (SVMs) with Gaussian kernels to learn and predict the nonlinear dynamics of single biological neurons. We show that SVMs for regression learn the dynamics of the pyloric dilator neuron of the Australian crayfish, and we determine the optimal SVM parameters with regard to the test error. Compared to conventional RBF networks, SVMs learned faster and performed better iterated one-step-ahead prediction with regard to training and test error. From a biological point of view, SVMs are especially better at predicting the most important part of the dynamics, where the membrane potential is driven by superimposed synaptic inputs to the threshold for the oscillatory peak.

PDF [BibTex]


Estimating a Kernel Fisher Discriminant in the Presence of Label Noise

Lawrence, N., Schölkopf, B.

In 18th International Conference on Machine Learning, pages: 306-313, (Editors: CE Brodley and A Pohoreckyj Danyluk), Morgan Kaufmann , San Fransisco, CA, USA, 18th International Conference on Machine Learning (ICML), 2001 (inproceedings)

Web [BibTex]


A Generalized Representer Theorem

Schölkopf, B., Herbrich, R., Smola, A.

In Lecture Notes in Computer Science, Vol. 2111, (2111):416-426, LNCS, (Editors: D Helmbold and R Williamson), Springer, Berlin, Germany, Annual Conference on Computational Learning Theory (COLT/EuroCOLT), 2001 (inproceedings)

[BibTex]


Combining Off- and On-line Calibration of a Digital Camera

Urbanek, M., Horaud, R., Sturm, P.

In Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling, pages: 99-106, Third International Conference on 3-D Digital Imaging and Modeling, June 2001 (inproceedings)

Abstract
We introduce a novel outlook on the self-calibration task, by considering images taken by a camera in motion, allowing for zooming and focusing. Apart from the complex relationship between the lens control settings and the intrinsic camera parameters, a prior off-line calibration allows us to neglect the setting of focus, and to fix the principal point and aspect ratio throughout distinct views. Thus, the calibration matrix depends only on the zoom position. Given a fully calibrated reference view, one has only one parameter to estimate for any other view of the same scene, in order to calibrate it and to be able to perform metric reconstructions. We provide a closed-form solution, and validate the reliability of the algorithm with experiments on real images. An important advantage of our method is that the number of critical camera configurations associated with it is reduced to one. Moreover, we propose a method for computing the epipolar geometry of two views, taken from different positions and with different (spatial) resolutions; the idea is to take an appropriate third view that is "easy" to match with the other two.

ZIP [BibTex]


Extracting egomotion from optic flow: limits of accuracy and neural matched filters

Dahmen, H-J. Franz, MO. Krapp, HG.

In pages: 143-168, Springer, Berlin, 2001 (inbook)

[BibTex]


Bound on the Leave-One-Out Error for Density Support Estimation using nu-SVMs

Gretton, A., Herbrich, R., Schölkopf, B., Smola, A., Rayner, P.

University of Cambridge, 2001 (techreport)

[BibTex]


The pedestal effect with a pulse train and its constituent sinusoids

Henning, G., Wichmann, F., Bird, C.

Twenty-Sixth Annual Interdisciplinary Conference, 2001 (poster)

Abstract
Curves showing "threshold" contrast for detecting a signal grating as a function of the contrast of a masking grating of the same orientation, spatial frequency, and phase show a characteristic improvement in performance at masker contrasts near the contrast threshold of the unmasked signal. Depending on the percentage of correct responses used to define the threshold, the best performance can be as much as a factor of three better than the unmasked threshold obtained in the absence of any masking grating. The result is called the pedestal effect (sometimes, the dipper function). We used a 2AFC procedure to measure the effect with harmonically related sinusoids ranging from 2 to 16 c/deg - all with maskers of the same orientation, spatial frequency and phase - and with masker contrasts ranging from 0 to 50%. The curves for different spatial frequencies are identical if both the vertical axis (showing the threshold signal contrast) and the horizontal axis (showing the masker contrast) are scaled by the threshold contrast of the signal obtained with no masker. Further, a pulse train with a fundamental frequency of 2 c/deg produces a curve that is indistinguishable from that of a 2-c/deg sinusoid despite the fact that at higher masker contrasts, the pulse train contains at least 8 components all of them equally detectable. The effect of adding 1-D spatial noise is also discussed.

[BibTex]


Markovian domain fingerprinting: statistical segmentation of protein sequences

Bejerano, G., Seldin, Y., Margalit, H., Tishby, N.

Bioinformatics, 17(10):927-934, 2001 (article)

PDF Web [BibTex]


Unsupervised Sequence Segmentation by a Mixture of Switching Variable Memory Markov Sources

Seldin, Y., Bejerano, G., Tishby, N.

In Proceedings of the 18th International Conference on Machine Learning (ICML 2001), pages: 513-520, 18th International Conference on Machine Learning (ICML), 2001 (inproceedings)

Abstract
We present a novel information theoretic algorithm for unsupervised segmentation of sequences into alternating Variable Memory Markov sources. The algorithm is based on competitive learning between Markov models, when implemented as Prediction Suffix Trees (Ron et al., 1996) using the MDL principle. By applying a model clustering procedure, based on rate distortion theory combined with deterministic annealing, we obtain a hierarchical segmentation of sequences between alternating Markov sources. The algorithm seems to be self-regulated and automatically avoids over-segmentation. The method is applied successfully to unsupervised segmentation of multilingual texts into languages, where it is able to infer correctly both the number of languages and the language switching points. When applied to protein sequence families, we demonstrate the method's ability to identify biologically meaningful sub-sequences within the proteins, which correspond to important functional sub-units called domains.

PDF [BibTex]


Feature Selection for SVMs

Weston, J., Mukherjee, S., Chapelle, O., Pontil, M., Poggio, T., Vapnik, V.

In Advances in Neural Information Processing Systems 13, pages: 668-674, (Editors: Leen, T.K. , T.G. Dietterich, V. Tresp), MIT Press, Cambridge, MA, USA, Fourteenth Annual Neural Information Processing Systems Conference (NIPS), April 2001 (inproceedings)

Abstract
We introduce a method of feature selection for Support Vector Machines. The method is based upon finding those features which minimize bounds on the leave-one-out error. This search can be efficiently performed via gradient descent. The resulting algorithms are shown to be superior to some standard feature selection algorithms on both toy data and real-life problems of face recognition, pedestrian detection and analyzing DNA microarray data.

PDF Web [BibTex]


The control structure of artificial creatures

Zhou, D., Dai, R.

Artificial Life and Robotics, 5(3), 2001, invited article (article)

Web [BibTex]


Pattern Selection Using the Bias and Variance of Ensemble

Shin, H., Cho, S.

Journal of the Korean Institute of Industrial Engineers, 28(1):112-127, March 2001 (article)

Abstract
A useful pattern is one that contributes much to learning. For a classification problem, patterns near the class boundary surfaces carry more information to the classifier; for a regression problem, those near the estimated surface do. In both cases, usefulness is defined only for patterns either without error or with negligible error. Using only the useful patterns gives several benefits. First, the computational complexity of learning, in both memory and time, is decreased. Second, overfitting is avoided even when the learner is over-sized. Third, learning produces more stable learners. In this paper, we propose a pattern “utility index” that measures the utility of an individual pattern. The utility index is based on the bias and variance of a pattern trained by a network ensemble. In classification, a pattern with low bias and high variance gets a high score; in regression, one with low bias and low variance gets a high score. Based on the distribution of the utility index, the original training set is divided into a high-score group and a low-score group, and only the high-score group is used for training. The proposed method is tested on synthetic and real-world benchmark datasets and gives better or at least comparable performance.
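
A minimal regression-side sketch of the utility index, assuming bootstrap polynomial fits as the ensemble (the data, ensemble size, and 80% cut-off are invented for illustration): per-pattern squared bias and ensemble variance are combined, and only the high-score group is kept.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data: y = sin(x) plus noise, with a few gross outliers.
n = 100
x = np.linspace(0, 2 * np.pi, n)
y = np.sin(x) + rng.normal(scale=0.1, size=n)
outliers = [10, 40, 70]
y[outliers] += 3.0

# Ensemble of degree-5 polynomial fits on bootstrap resamples.
preds = []
for _ in range(30):
    idx = rng.integers(0, n, size=n)
    coef = np.polyfit(x[idx], y[idx], deg=5)
    preds.append(np.polyval(coef, x))
preds = np.array(preds)                   # shape (ensemble, pattern)

bias2 = (preds.mean(axis=0) - y) ** 2     # squared bias per pattern
var = preds.var(axis=0)                   # ensemble variance per pattern

# Regression utility: low bias and low variance score high.
utility = -(bias2 + var)
keep = np.argsort(utility)[::-1][: int(0.8 * n)]   # high-score group
print(sorted(set(range(n)) - set(keep.tolist()))[:5])
```

The planted outliers receive large squared bias and fall into the low-score group, so they are dropped before the final training pass, which is the stabilizing effect the abstract describes.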

[BibTex]



Hybrid IDM/Impedance learning in human movements

Burdet, E., Teng, K., Chew, C., Peters, J., , B.

In ISHF 2001, 1, pages: 1-9, 1st International Symposium on Measurement, Analysis and Modeling of Human Functions (ISHF2001), September 2001 (inproceedings)

Abstract
In spite of motor output variability and the delay in the sensorimotor loop, humans routinely perform intrinsically unstable tasks. The hybrid IDM/impedance learning controller presented in this paper enables skillful performance in strongly stable and unstable environments. It considers motor output variability identified from experimental data, and contains two modules that concurrently learn the endpoint force and impedance adapted to the environment. The simulations suggest how humans learn to skillfully perform intrinsically unstable tasks. Testable predictions are proposed.
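
The two-module idea (a feedforward command learned alongside impedance) can be caricatured in a few lines, assuming a deliberately crude discrete-time unstable plant; none of this reproduces the paper's controller or human data. Feedback stiffness grows when a trial goes badly, while the feedforward term learns to cancel the residual error.

```python
a, d = 1.2, 0.5          # unstable plant gain (>1) and constant disturbance
K, u_ff = 0.0, 0.0       # impedance (feedback gain) and feedforward command
errors = []
for trial in range(40):
    x, trial_err = 0.0, 0.0
    for t in range(50):
        u = u_ff - K * x                  # hybrid command: feedforward + impedance
        x = a * x + d + u                 # plant update x' = a*x + d + u
        x = max(min(x, 10.0), -10.0)      # saturate to keep the simulation finite
        trial_err += abs(x) / 50
    errors.append(trial_err)
    K += 0.05 * trial_err                 # stiffen when the trial went badly
    u_ff -= 0.5 * x                       # cancel the residual steady-state error
print(round(errors[0], 2), round(errors[-1], 3))
```

Early trials diverge into the saturation limit; once the learned stiffness exceeds the destabilizing gain, the feedforward module converges toward -d and the per-trial error collapses, mirroring the concurrent-learning story at cartoon scale.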

PDF Web [BibTex]



Support Vector Regression for Black-Box System Identification

Gretton, A., Doucet, A., Herbrich, R., Rayner, P., Schölkopf, B.

In 11th IEEE Workshop on Statistical Signal Processing, pages: 341-344, IEEE Signal Processing Society, Piscataway, NY, USA, 11th IEEE Workshop on Statistical Signal Processing, 2001 (inproceedings)

Abstract
In this paper, we demonstrate the use of support vector regression (SVR) techniques for black-box system identification. These methods derive from statistical learning theory, and are of great theoretical and practical interest. We briefly describe the theory underpinning SVR, and compare support vector methods with other approaches using radial basis networks. Finally, we apply SVR to modeling the behaviour of a hydraulic robot arm, and show that SVR improves on previously published results.
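
A hedged stand-in for the SVR setup (kernel ridge regression with an RBF kernel, i.e., squared loss in place of the ε-insensitive loss) applied to NARX-style lagged regressors of an invented toy plant:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy nonlinear system to identify: y_t = f(y_{t-1}, u_{t-1}) + noise.
def plant(y_prev, u_prev):
    return 0.8 * np.tanh(y_prev) + 0.5 * u_prev

T = 300
u = rng.uniform(-1, 1, size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = plant(y[t - 1], u[t - 1]) + rng.normal(scale=0.01)

# NARX regressors: predict y_t from (y_{t-1}, u_{t-1}).
X = np.column_stack([y[:-1], u[:-1]])
target = y[1:]

def rbf(A, B, gamma=2.0):
    """RBF (Gaussian) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge regression: solve (K + lambda*I) alpha = y.
K = rbf(X, X)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(K)), target)
pred = K @ alpha
rmse = np.sqrt(np.mean((pred - target) ** 2))
print(round(rmse, 4))
```

True SVR would replace the linear solve with a constrained quadratic program and yield a sparse set of support vectors; the kernel machinery and the lagged black-box regressors are the shared ingredients.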

PostScript [BibTex]



An Improved Training Algorithm for Kernel Fisher Discriminants

Mika, S., Schölkopf, B., Smola, A.

In Proceedings AISTATS, pages: 98-104, (Editors: T Jaakkola and T Richardson), Morgan Kaufmann, San Francisco, CA, Artificial Intelligence and Statistics (AISTATS), January 2001 (inproceedings)

Web [BibTex]



Pattern Selection for ‘Classification’ using the Bias and Variance of Ensemble Neural Network

Shin, H., Cho, S.

In Proc. of the Korea Information Science Conference, pages: 307-309, Korea Information Science Conference, October 2001, Best Paper Award (inproceedings)

[BibTex]



Bound on the Leave-One-Out Error for 2-Class Classification using nu-SVMs

Gretton, A., Herbrich, R., Schölkopf, B., Rayner, P.

University of Cambridge, 2001, Updated May 2003 (literature review expanded) (techreport)

Abstract
Three estimates of the leave-one-out error for ν-support vector (SV) machine binary classifiers are presented. Two of the estimates are based on the geometrical concept of the span, which was introduced in the context of bounding the leave-one-out error for C-SV machine binary classifiers, while the third is based on optimisation over the criterion used to train the ν-support vector classifier. It is shown that the estimates presented herein provide informative and efficient approximations of the generalisation behaviour, in both a toy example and benchmark data sets. The proof strategies in the ν-SV context are also compared with those used to derive leave-one-out error estimates in the C-SV case.

PostScript [BibTex]



Regularized principal manifolds

Smola, A., Mika, S., Schölkopf, B., Williamson, R.

Journal of Machine Learning Research, 1, pages: 179-209, June 2001 (article)

Abstract
Many settings of unsupervised learning can be viewed as quantization problems - the minimization of the expected quantization error subject to some restrictions. This allows the use of tools such as regularization from the theory of (supervised) risk minimization for unsupervised learning. This setting turns out to be closely related to principal curves, the generative topographic map, and robust coding. We explore this connection in two ways: (1) we propose an algorithm for finding principal manifolds that can be regularized in a variety of ways; and (2) we derive uniform convergence bounds and hence bounds on the learning rates of the algorithm. In particular, we give bounds on the covering numbers which allow us to obtain nearly optimal learning rates for certain types of regularization operators. Experimental results demonstrate the feasibility of the approach.
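
The quantization view the abstract starts from can be made concrete with plain k-means, the zero-dimensional special case in which the "manifold" is a finite codebook; the circle data and all parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Data on a noisy circle: a 1-D structure embedded in 2-D.
theta = rng.uniform(0, 2 * np.pi, size=400)
X = np.column_stack([np.cos(theta), np.sin(theta)])
X += rng.normal(scale=0.05, size=X.shape)

# Plain k-means (Lloyd iterations): minimize the expected quantization
# error with a finite codebook, the baseline the paper generalizes from.
k = 8
centers = X[rng.choice(len(X), size=k, replace=False)]
for _ in range(50):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)
    centers = np.array([X[assign == j].mean(axis=0) if (assign == j).any()
                        else centers[j] for j in range(k)])

d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
quant_err = d2.min(axis=1).mean()
print(round(quant_err, 3))
```

A regularized principal manifold would replace the discrete codebook with a smooth parameterized curve and add a roughness penalty to the same quantization objective, which is the extension the paper develops.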

PDF [BibTex]



Inference Principles and Model Selection

Buhmann, J., Schölkopf, B.

(01301), Dagstuhl Seminar, 2001 (techreport)

Web [BibTex]



Anabolic and Catabolic Gene Expression Pattern Analysis in Normal Versus Osteoarthritic Cartilage Using Complementary DNA-Array Technology

Aigner, T., Zien, A., Gehrsitz, A., Gebhard, P., McKenna, L.

Arthritis and Rheumatism, 44(12):2777-2789, December 2001 (article)

Web [BibTex]
