

2001


Perception of Planar Shapes in Depth

Wichmann, F., Willems, B., Rosas, P., Wagemans, J.

Journal of Vision, 1(3):176, First Annual Meeting of the Vision Sciences Society (VSS), December 2001 (poster)

Abstract
We investigated the influence of the perceived 3D-orientation of planar elliptical shapes on the perception of the shapes themselves. Ellipses were projected onto the surface of a sphere and subjects were asked to indicate if the projected shapes looked as if they were a circle on the surface of the sphere. The image of the sphere was obtained from a real, (near) perfect sphere using a highly accurate digital camera (real sphere diameter 40 cm; camera-to-sphere distance 320 cm; for details see Willems et al., Perception 29, S96, 2000; Photometrics SenSys 400 digital camera with Rodenstock lens, 12-bit linear luminance resolution). Stimuli were presented monocularly on a carefully linearized Sony GDM-F500 monitor keeping the scene geometry as in the real case (sphere diameter on screen 8.2 cm; viewing distance 66 cm). Experiments were run in a darkened room using a viewing tube to minimize, as far as possible, extraneous monocular cues to depth. Three different methods were used to obtain subjects' estimates of 3D-shape: the method of adjustment, temporal 2-alternative forced choice (2AFC) and yes/no. Several results are noteworthy. First, mismatch between perceived and objective slant tended to decrease with increasing objective slant. Second, the variability of the settings, too, decreased with increasing objective slant. Finally, we comment on the results obtained using different psychophysical methods and compare our results to those obtained using a real sphere and binocular vision (Willems et al.).

Web DOI [BibTex]

Nonlinear blind source separation using kernel feature spaces

Harmeling, S., Ziehe, A., Kawanabe, M., Blankertz, B., Müller, K.

In ICA 2001, pages: 102-107, (Editors: Lee, T.-W. , T.P. Jung, S. Makeig, T. J. Sejnowski), Third International Workshop on Independent Component Analysis and Blind Signal Separation, December 2001 (inproceedings)

Abstract
In this work we propose a kernel-based blind source separation (BSS) algorithm that can perform nonlinear BSS for general invertible nonlinearities. For our kTDSEP algorithm we have to go through four steps: (i) adapting to the intrinsic dimension of the data mapped to feature space F, (ii) finding an orthonormal basis of this submanifold, (iii) mapping the data into the subspace of F spanned by this orthonormal basis, and (iv) applying temporal decorrelation BSS (TDSEP) to the mapped data. After demixing we get a number of irrelevant components and the original sources. To find out which ones are the components of interest, we propose a criterion that allows us to identify the original sources. The excellent performance of kTDSEP is demonstrated in experiments on nonlinearly mixed speech data.
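A minimal, illustrative sketch of the four-step idea in Python (not the authors' implementation): kernel PCA stands in for steps (i)-(iii), and a simple AMUSE-style diagonalization of one time-lagged covariance stands in for TDSEP in step (iv). All function names, dimensions and parameter values below are assumptions.

# Minimal sketch of nonlinear BSS via a kernel feature space (kTDSEP-style idea):
# (i)-(iii) approximate a low-dimensional basis of the mapped data with kernel PCA
# and project onto it, (iv) temporally decorrelate the projections.
import numpy as np
from sklearn.decomposition import KernelPCA

def ktdsep_sketch(x, dim=4, lag=1, gamma=1.0):
    """x: array of shape (n_samples, n_observed_channels)."""
    # Steps (i)-(iii): map into feature space and keep a dim-dimensional subspace.
    z = KernelPCA(n_components=dim, kernel="rbf", gamma=gamma).fit_transform(x)
    z = z - z.mean(axis=0)
    # Whiten the projected data.
    c0 = np.cov(z.T)
    d, e = np.linalg.eigh(c0)
    d = np.maximum(d, 1e-12)                     # guard against tiny eigenvalues
    zw = z @ (e @ np.diag(1.0 / np.sqrt(d)) @ e.T)
    # Step (iv): diagonalize one symmetrized time-lagged covariance (AMUSE-style).
    c_lag = zw[:-lag].T @ zw[lag:] / (len(zw) - lag)
    c_lag = 0.5 * (c_lag + c_lag.T)
    _, rot = np.linalg.eigh(c_lag)
    return zw @ rot   # candidate components; a selection criterion picks the sources

# usage: components = ktdsep_sketch(mixed_signals, dim=6)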

PDF [BibTex]

Pattern Selection for ‘Regression’ using the Bias and Variance of Ensemble Network

Shin, H., Cho, S.

In Proc. of the Korean Institute of Industrial Engineers Conference, pages: 10-19, Korean Industrial Engineers Conference, November 2001 (inproceedings)

[BibTex]

Pattern Selection for ‘Classification’ using the Bias and Variance of Ensemble Neural Network

Shin, H., Cho, S.

In Proc. of the Korea Information Science Conference, pages: 307-309, Korea Information Science Conference, October 2001, Best Paper Award (inproceedings)

[BibTex]

Hybrid IDM/Impedance learning in human movements

Burdet, E., Teng, K., Chew, C., Peters, J., , B.

In ISHF 2001, 1, pages: 1-9, 1st International Symposium on Measurement, Analysis and Modeling of Human Functions (ISHF2001), September 2001 (inproceedings)

Abstract
In spite of motor output variability and the delay in the sensori-motor loop, humans routinely perform intrinsically unstable tasks. The hybrid IDM/impedance learning controller presented in this paper enables skilful performance in strongly stable and unstable environments. It considers motor output variability identified from experimental data, and contains two modules concurrently learning the endpoint force and impedance adapted to the environment. The simulations suggest how humans learn to skillfully perform intrinsically unstable tasks. Testable predictions are proposed.

PDF Web [BibTex]

Combining Off- and On-line Calibration of a Digital Camera

Urbanek, M., Horaud, R., Sturm, P.

In Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling, pages: 99-106, Third International Conference on 3-D Digital Imaging and Modeling, June 2001 (inproceedings)

Abstract
We introduce a novel outlook on the self-calibration task by considering images taken by a camera in motion, allowing for zooming and focusing. Despite the complex relationship between the lens control settings and the intrinsic camera parameters, a prior off-line calibration allows us to neglect the focus setting and to fix the principal point and aspect ratio across distinct views. Thus, the calibration matrix depends only on the zoom position. Given a fully calibrated reference view, only one parameter has to be estimated for any other view of the same scene in order to calibrate it and to be able to perform metric reconstructions. We provide a closed-form solution and validate the reliability of the algorithm with experiments on real images. An important advantage of our method is that the number of critical camera configurations associated with it is reduced to one. Moreover, we propose a method for computing the epipolar geometry of two views taken from different positions and with different (spatial) resolutions; the idea is to take an appropriate third view that is "easy" to match with the other two.
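As an illustration of the resulting one-parameter problem, in standard pinhole notation (my notation, not necessarily the paper's): once the off-line stage has fixed the aspect ratio \alpha and the principal point (u_0, v_0), the on-line calibration matrix is

$$ K(f) \;=\; \begin{pmatrix} f & 0 & u_0 \\ 0 & \alpha f & v_0 \\ 0 & 0 & 1 \end{pmatrix}, $$

so only the zoom-dependent focal length f has to be estimated for each new view.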

ZIP [BibTex]

Support vector novelty detection applied to jet engine vibration spectra

Hayton, P., Schölkopf, B., Tarassenko, L., Anuzis, P.

In Advances in Neural Information Processing Systems 13, pages: 946-952, (Editors: TK Leen and TG Dietterich and V Tresp), MIT Press, Cambridge, MA, USA, 14th Annual Neural Information Processing Systems Conference (NIPS), April 2001 (inproceedings)

Abstract
A system has been developed to extract diagnostic information from jet engine carcass vibration data. Support Vector Machines applied to novelty detection provide a measure of how unusual the shape of a vibration signature is, by learning a representation of normality. We describe a novel method for Support Vector Machines of including information from a second class for novelty detection and give results from the application to Jet Engine vibration analysis.
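A minimal sketch of SV novelty detection in the spirit of this approach, using scikit-learn's one-class SVM; the data shapes, feature extraction and parameter values are placeholders, not the authors' settings.

# Learn a model of "normal" vibration spectra, then flag unusual signatures.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal_spectra = rng.normal(size=(200, 64))      # stand-in for normal-engine spectra
test_spectra = rng.normal(size=(20, 64)) + 3.0   # stand-in for possibly anomalous spectra

model = OneClassSVM(kernel="rbf", gamma=0.05, nu=0.05).fit(normal_spectra)
novelty_score = -model.decision_function(test_spectra)  # larger = more unusual
flags = model.predict(test_spectra)                      # -1 = novel, +1 = normal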

PDF Web [BibTex]

Four-legged Walking Gait Control Using a Neuromorphic Chip Interfaced to a Support Vector Learning Algorithm

Still, S., Schölkopf, B., Hepp, K., Douglas, R.

In Advances in Neural Information Processing Systems 13, pages: 741-747, (Editors: TK Leen and TG Dietterich and V Tresp), MIT Press, Cambridge, MA, USA, 14th Annual Neural Information Processing Systems Conference (NIPS), April 2001 (inproceedings)

Abstract
To control the walking gaits of a four-legged robot we present a novel neuromorphic VLSI chip that coordinates the relative phasing of the robot's legs similar to how spinal Central Pattern Generators are believed to control vertebrate locomotion [3]. The chip controls the leg movements by driving motors with time varying voltages which are the outputs of a small network of coupled oscillators. The characteristics of the chip's output voltages depend on a set of input parameters. The relationship between input parameters and output voltages can be computed analytically for an idealized system. In practice, however, this ideal relationship is only approximately true due to transistor mismatch and offsets.

PDF Web [BibTex]

Algorithmic Stability and Generalization Performance

Bousquet, O., Elisseeff, A.

In Advances in Neural Information Processing Systems 13, pages: 196-202, (Editors: Leen, T.K. , T.G. Dietterich, V. Tresp), MIT Press, Cambridge, MA, USA, Fourteenth Annual Neural Information Processing Systems Conference (NIPS), April 2001 (inproceedings)

Abstract
We present a novel way of obtaining PAC-style bounds on the generalization error of learning algorithms, explicitly using their stability properties. A stable learner is one for which the learned solution does not change much for small changes in the training set. The bounds we obtain do not depend on any measure of the complexity of the hypothesis space (e.g. VC dimension) but rather on how the learning algorithm searches this space, and can thus be applied even when the VC dimension is infinite. We demonstrate that regularization networks possess the required stability property and apply our method to obtain new bounds on their generalization performance.
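For illustration, a bound of the flavor obtained from uniform stability (notation mine): if an algorithm has uniform stability \beta on training sets of size m and the loss is bounded by M, then with probability at least 1-\delta,

$$ R \;\le\; R_{\mathrm{emp}} + 2\beta + (4m\beta + M)\sqrt{\frac{\ln(1/\delta)}{2m}}, $$

in which no capacity measure of the hypothesis space appears; for regularization networks, \beta scales roughly like 1/(\lambda m), where \lambda is the regularization constant.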

PDF Web [BibTex]

The Kernel Trick for Distances

Schölkopf, B.

In Advances in Neural Information Processing Systems 13, pages: 301-307, (Editors: TK Leen and TG Dietterich and V Tresp), MIT Press, Cambridge, MA, USA, 14th Annual Neural Information Processing Systems Conference (NIPS), April 2001 (inproceedings)

Abstract
A method is described which, like the kernel trick in support vector machines (SVMs), lets us generalize distance-based algorithms to operate in feature spaces, usually nonlinearly related to the input space. This is done by identifying a class of kernels which can be represented as norm-based distances in Hilbert spaces. It turns out that common kernel algorithms, such as SVMs and kernel PCA, are really distance-based algorithms and can be run with that class of kernels, too. As well as providing a useful new insight into how these algorithms work, the present work can form the basis for conceiving new algorithms.
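The identity at the heart of this construction: for a kernel k with feature map \Phi into a Hilbert space,

$$ \|\Phi(x) - \Phi(x')\|^2 \;=\; k(x,x) - 2\,k(x,x') + k(x',x'), $$

so any algorithm that accesses its data only through pairwise distances can be run in feature space by substituting the right-hand side for the squared distance.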

PDF Web [BibTex]

Vicinal Risk Minimization

Chapelle, O., Weston, J., Bottou, L., Vapnik, V.

In Advances in Neural Information Processing Systems 13, pages: 416-422, (Editors: Leen, T.K. , T.G. Dietterich, V. Tresp), MIT Press, Cambridge, MA, USA, Fourteenth Annual Neural Information Processing Systems Conference (NIPS) , April 2001 (inproceedings)

Abstract
The Vicinal Risk Minimization principle establishes a bridge between generative models and methods derived from the Structural Risk Minimization Principle such as Support Vector Machines or Statistical Regularization. We explain how VRM provides a framework which integrates a number of existing algorithms, such as Parzen windows, Support Vector Machines, Ridge Regression, Constrained Logistic Classifiers and Tangent-Prop. We then show how the approach implies new algorithms for solving problems usually associated with generative models. New algorithms are described for dealing with pattern recognition problems with very different pattern distributions and dealing with unlabeled data. Preliminary empirical results are presented.
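As a sketch (notation mine), the vicinal risk replaces each training point by a vicinity distribution, for instance a Gaussian Parzen window:

$$ R_{\mathrm{vic}}(f) \;=\; \frac{1}{n}\sum_{i=1}^{n}\int L\bigl(f(x), y_i\bigr)\, dP_{x_i}(x), \qquad P_{x_i} = \mathcal{N}(x_i, \sigma^2 I), $$

which recovers empirical risk minimization as \sigma \to 0 and yields Parzen-window-like estimators for other choices of the vicinities.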

PDF Web [BibTex]

Feature Selection for SVMs

Weston, J., Mukherjee, S., Chapelle, O., Pontil, M., Poggio, T., Vapnik, V.

In Advances in Neural Information Processing Systems 13, pages: 668-674, (Editors: Leen, T.K. , T.G. Dietterich, V. Tresp), MIT Press, Cambridge, MA, USA, Fourteenth Annual Neural Information Processing Systems Conference (NIPS), April 2001 (inproceedings)

Abstract
We introduce a method of feature selection for Support Vector Machines. The method is based upon finding those features which minimize bounds on the leave-one-out error. This search can be efficiently performed via gradient descent. The resulting algorithms are shown to be superior to some standard feature selection algorithms on both toy data and real-life problems of face recognition, pedestrian detection and analyzing DNA microarray data.
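A hedged sketch of the kind of criterion involved (notation mine): with a vector \sigma of per-feature scaling factors applied to the inputs, one leave-one-out bound for hard-margin SVMs is the radius-margin bound

$$ \mathrm{LOO}(\sigma) \;\le\; 4\, R^2(\sigma)\, \|w(\sigma)\|^2, $$

where R is the radius of the smallest sphere enclosing the scaled training points in feature space and w the SVM solution; minimizing such a bound over \sigma by gradient descent drives many \sigma_j toward zero, which performs the feature selection.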

PDF Web [BibTex]

Occam’s Razor

Rasmussen, CE., Ghahramani, Z.

In Advances in Neural Information Processing Systems 13, pages: 294-300, (Editors: Leen, T.K. , T.G. Dietterich, V. Tresp), MIT Press, Cambridge, MA, USA, Fourteenth Annual Neural Information Processing Systems Conference (NIPS), April 2001 (inproceedings)

Abstract
The Bayesian paradigm apparently only sometimes gives rise to Occam's Razor; at other times very large models perform well. We give simple examples of both kinds of behaviour. The two views are reconciled when measuring complexity of functions, rather than of the machinery used to implement them. We analyze the complexity of functions for some linear in the parameter models that are equivalent to Gaussian Processes, and always find Occam's Razor at work.
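The standard formalization behind this discussion is the marginal likelihood (evidence) of a model M,

$$ p(\mathcal{D}\mid M) \;=\; \int p(\mathcal{D}\mid \theta, M)\, p(\theta\mid M)\, d\theta, $$

which automatically penalizes models that spread their prior probability over many possible data sets; the paper's point is that the relevant notion of complexity is that of the functions the model can produce (as in the equivalent Gaussian process view), not the number of its parameters.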

PDF Web [BibTex]

Plaid maskers revisited: asymmetric plaids

Wichmann, F.

pages: 57, 4. Tübinger Wahrnehmungskonferenz (TWK), March 2001 (poster)

Abstract
A large number of psychophysical and physiological experiments suggest that luminance patterns are independently analysed in channels responding to different bands of spatial frequency. There are, however, interactions among stimuli falling well outside the usual estimates of channels' bandwidths. Derrington & Henning (1989) first reported that, in 2-AFC sinusoidal-grating detection, plaid maskers, whose components are oriented symmetrically about the signal orientation, cause a substantially larger threshold elevation than would be predicted from their sinusoidal constituents alone. Wichmann & Tollin (1997a,b) and Wichmann & Henning (1998) confirmed and extended the original findings, measuring masking as a function of presentation time and plaid mask contrast. Here I investigate masking using plaid patterns whose components are asymmetrically positioned about the signal orientation. Standard temporal 2-AFC pattern discrimination experiments were conducted using plaid patterns and oblique sinusoidal gratings as maskers, and horizontally orientated sinusoidal gratings as signals. Signal and maskers were always interleaved on the display (refresh rate 152 Hz). As in the case of the symmetrical plaid maskers, substantial masking was observed for many of the asymmetrical plaids. Masking is neither a straightforward function of the plaid's constituent sinusoidal components nor of the periodicity of the luminance beats between components. These results cause problems for the notion that, even for simple stimuli, detection and discrimination are based on the outputs of channels tuned to limited ranges of spatial frequency and orientation, even if a limited set of nonlinear interactions between these channels is allowed.

Web [BibTex]

An Improved Training Algorithm for Kernel Fisher Discriminants

Mika, S., Schölkopf, B., Smola, A.

In Proceedings AISTATS, pages: 98-104, (Editors: T Jaakkola and T Richardson), Morgan Kaufmann, San Francisco, CA, Artificial Intelligence and Statistics (AISTATS), January 2001 (inproceedings)

Web [BibTex]

Nonstationary Signal Classification using Support Vector Machines

Gretton, A., Davy, M., Doucet, A., Rayner, P.

In 11th IEEE Workshop on Statistical Signal Processing, pages: 305-305, 11th IEEE Workshop on Statistical Signal Processing, 2001 (inproceedings)

Abstract
In this paper, we demonstrate the use of support vector (SV) techniques for the binary classification of nonstationary sinusoidal signals with quadratic phase. We briefly describe the theory underpinning SV classification, and introduce the Cohen's group time-frequency representation, which is used to process the non-stationary signals so as to define the classifier input space. We show that the SV classifier outperforms alternative classification methods on this processed data.

PostScript [BibTex]

Enhanced User Authentication through Typing Biometrics with Artificial Neural Networks and K-Nearest Neighbor Algorithm

Wong, FWMH., Supian, ASM., Ismail, AF., Lai, WK., Ong, CS.

In 2001 (inproceedings)

[BibTex]

Predicting the Nonlinear Dynamics of Biological Neurons using Support Vector Machines with Different Kernels

Frontzek, T., Lal, TN., Eckmiller, R.

In Proceedings of the International Joint Conference on Neural Networks (IJCNN'2001), Washington DC, 2, pages: 1492-1497, International Joint Conference on Neural Networks (IJCNN'2001), 2001 (inproceedings)

Abstract
Based on biological data we examine the ability of Support Vector Machines (SVMs) with Gaussian, polynomial and tanh kernels to learn and predict the nonlinear dynamics of single biological neurons. We show that SVMs for regression learn the dynamics of the pyloric dilator neuron of the Australian crayfish, and we determine the optimal SVM parameters with regard to the test error. Compared to conventional RBF networks and MLPs, SVMs with Gaussian kernels learned faster and performed better in iterated one-step-ahead prediction with regard to training and test error. From a biological point of view, SVMs are especially better at predicting the most important part of the dynamics, where the membrane potential is driven by superimposed synaptic inputs towards the threshold for the oscillatory peak.
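For concreteness, a minimal sketch of iterated one-step-ahead prediction with an RBF-kernel support vector regressor; the delay-embedding dimension and hyperparameters are placeholders, not the values determined in the paper.

# Train on delay-embedded samples v[t-d..t-1] -> v[t], then feed predictions back
# in to roll the model forward (iterated one-step-ahead prediction).
import numpy as np
from sklearn.svm import SVR

def fit_predictor(v, d=10):
    X = np.array([v[i:i + d] for i in range(len(v) - d)])
    y = v[d:]
    return SVR(kernel="rbf", C=10.0, gamma=0.1).fit(X, y)

def iterate_prediction(model, history, n_steps, d=10):
    window = list(history[-d:])
    preds = []
    for _ in range(n_steps):
        nxt = float(model.predict(np.array(window[-d:]).reshape(1, -1))[0])
        preds.append(nxt)
        window.append(nxt)   # feed the prediction back in
    return np.array(preds)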

PDF [BibTex]

Computationally Efficient Face Detection

Romdhani, S., Torr, P., Schölkopf, B., Blake, A.

In Computer Vision, ICCV 2001, vol. 2, (73):695-700, IEEE, 8th International Conference on Computer Vision, 2001 (inproceedings)

DOI [BibTex]

Design and Verification of Supervisory Controller of High-Speed Train

Yoo, SP., Lee, DY., Son, HI.

In IEEE International Symposium on Industrial Electronics, pages: 1290-1295, IEEE Operations Center, Piscataway, NJ, USA, IEEE International Symposium on Industrial Electronics (ISIE), 2001 (inproceedings)

Abstract
A high-level supervisory controller is required to monitor, control, and diagnose the low-level controllers of the high-speed train. The supervisory controller controls the low-level controllers by monitoring input and output signals and events, and the high-speed train can be modeled as a discrete event system (DES). The high-speed train is modeled with automata, and the high-level control specification is defined. The supervisory controller is designed using the high-speed train model and the control specification. The designed supervisory controller is verified and evaluated in simulation using a computer-aided software engineering (CASE) tool, Object GEODE.

Web DOI [BibTex]

Towards Learning Path Planning for Solving Complex Robot Tasks

Frontzek, T., Lal, TN., Eckmiller, R.

In Proceedings of the International Conference on Artificial Neural Networks (ICANN'2001), Vienna, pages: 943-950, International Conference on Artificial Neural Networks (ICANN'2001), 2001 (inproceedings)

Abstract
For solving complex robot tasks it is necessary to incorporate path planning methods that are able to operate within different high-dimensional configuration spaces containing an unknown number of obstacles. Based on the Advanced A* algorithm (AA*), which uses expansion matrices instead of a simple expansion logic, we propose a further improvement of AA* that enables it to learn directly from sample planning tasks. This is done by inserting weights into the expansion matrix which are modified according to a special learning rule. For an exemplary planning task we show that Adaptive AA* learns movement vectors which allow larger movements than the initial ones into well-defined directions of the configuration space. Compared to standard approaches, planning times are clearly reduced.

PDF [BibTex]

Learning to predict the leave-one-out error of kernel based classifiers

Tsuda, K., Rätsch, G., Mika, S., Müller, K.

In International Conference on Artificial Neural Networks, ICANN'01, (LNCS 2130):331-338, (Editors: G. Dorffner, H. Bischof and K. Hornik), International Conference on Artificial Neural Networks, ICANN'01, 2001 (inproceedings)

PDF [BibTex]

A kernel approach for vector quantization with guaranteed distortion bounds

Tipping, M., Schölkopf, B.

In Artificial Intelligence and Statistics, pages: 129-134, (Editors: T Jaakkola and T Richardson), Morgan Kaufmann, San Francisco, CA, USA, 8th International Conference on Artificial Intelligence and Statistics (AI and STATISTICS), 2001 (inproceedings)

[BibTex]

Tracking a Small Set of Experts by Mixing Past Posteriors

Bousquet, O., Warmuth, M.

In Proceedings of the 14th Annual Conference on Computational Learning Theory, Lecture Notes in Computer Science, 2111, pages: 31-47, 14th Annual Conference on Computational Learning Theory (COLT), 2001 (inproceedings)

Abstract
In this paper, we examine on-line learning problems in which the target concept is allowed to change over time. In each trial a master algorithm receives predictions from a large set of $n$ experts. Its goal is to predict almost as well as the best sequence of such experts chosen off-line by partitioning the training sequence into $k+1$ sections and then choosing the best expert for each section. We build on methods developed by Herbster and Warmuth and consider an open problem posed by Freund where the experts in the best partition are from a small pool of size $m$. Since $k>>m$ the best expert shifts back and forth between the experts of the small pool. We propose algorithms that solve this open problem by mixing the past posteriors maintained by the master algorithm. We relate the number of bits needed for encoding the best partition to the loss bounds of the algorithms. Instead of paying $\log n$ for choosing the best expert in each section we first pay $\log {n\choose m}$ bits in the bounds for identifying the pool of $m$ experts and then $\log m$ bits per new section. In the bounds we also pay twice for encoding the boundaries of the sections.
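A rough sketch of the master algorithm's update (an exponential-weights loss update followed by mixing in past posteriors); the learning rate, the mixing weight alpha and the uniform mixing scheme below are illustrative choices, not the exact schemes analyzed in the paper.

# Exponential-weights loss update, then mix with an average of all previously
# stored posteriors, so experts that were good in earlier sections can be
# recovered quickly when the target concept shifts back.
import numpy as np

def mix_past_posteriors(losses, eta=0.5, alpha=0.01):
    """losses: array of shape (T, n) with each expert's loss per trial."""
    T, n = losses.shape
    w = np.full(n, 1.0 / n)
    past = [w.copy()]                       # stored posteriors
    weights_per_trial = []
    for t in range(T):
        weights_per_trial.append(w.copy())
        v = w * np.exp(-eta * losses[t])    # loss update
        v /= v.sum()
        past.append(v.copy())
        w = (1 - alpha) * v + alpha * np.mean(past, axis=0)   # mixing update
    return np.array(weights_per_trial)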

PDF PostScript [BibTex]

Learning and Prediction of the Nonlinear Dynamics of Biological Neurons with Support Vector Machines

Frontzek, T., Lal, TN., Eckmiller, R.

In Proceedings of the International Conference on Artificial Neural Networks (ICANN'2001), pages: 390-398, International Conference on Artificial Neural Networks (ICANN'2001), 2001 (inproceedings)

Abstract
Based on biological data we examine the ability of Support Vector Machines (SVMs) with Gaussian kernels to learn and predict the nonlinear dynamics of single biological neurons. We show that SVMs for regression learn the dynamics of the pyloric dilator neuron of the Australian crayfish, and we determine the optimal SVM parameters with regard to the test error. Compared to conventional RBF networks, SVMs learned faster and performed better in iterated one-step-ahead prediction with regard to training and test error. From a biological point of view, SVMs are especially better at predicting the most important part of the dynamics, where the membrane potential is driven by superimposed synaptic inputs towards the threshold for the oscillatory peak.

PDF [BibTex]

Estimating a Kernel Fisher Discriminant in the Presence of Label Noise

Lawrence, N., Schölkopf, B.

In 18th International Conference on Machine Learning, pages: 306-313, (Editors: CE Brodley and A Pohoreckyj Danyluk), Morgan Kaufmann, San Francisco, CA, USA, 18th International Conference on Machine Learning (ICML), 2001 (inproceedings)

Web [BibTex]

A Generalized Representer Theorem

Schölkopf, B., Herbrich, R., Smola, A.

In Lecture Notes in Computer Science, Vol. 2111, pages: 416-426, (Editors: D Helmbold and R Williamson), Springer, Berlin, Germany, Annual Conference on Computational Learning Theory (COLT/EuroCOLT), 2001 (inproceedings)

[BibTex]

The pedestal effect with a pulse train and its constituent sinusoids

Henning, G., Wichmann, F., Bird, C.

Twenty-Sixth Annual Interdisciplinary Conference, 2001 (poster)

Abstract
Curves showing "threshold" contrast for detecting a signal grating as a function of the contrast of a masking grating of the same orientation, spatial frequency, and phase show a characteristic improvement in performance at masker contrasts near the contrast threshold of the unmasked signal. Depending on the percentage of correct responses used to define the threshold, the best performance can be as much as a factor of three better than the unmasked threshold obtained in the absence of any masking grating. The result is called the pedestal effect (sometimes, the dipper function). We used a 2AFC procedure to measure the effect with harmonically related sinusoids ranging from 2 to 16 c/deg - all with maskers of the same orientation, spatial frequency and phase - and with masker contrasts ranging from 0 to 50%. The curves for different spatial frequencies are identical if both the vertical axis (showing the threshold signal contrast) and the horizontal axis (showing the masker contrast) are scaled by the threshold contrast of the signal obtained with no masker. Further, a pulse train with a fundamental frequency of 2 c/deg produces a curve that is indistinguishable from that of a 2-c/deg sinusoid despite the fact that at higher masker contrasts, the pulse train contains at least 8 components all of them equally detectable. The effect of adding 1-D spatial noise is also discussed.

[BibTex]

Unsupervised Segmentation and Classification of Mixtures of Markovian Sources

Seldin, Y., Bejerano, G., Tishby, N.

In The 33rd Symposium on the Interface of Computing Science and Statistics (Interface 2001 - Frontiers in Data Mining and Bioinformatics), pages: 1-15, 33rd Symposium on the Interface of Computing Science and Statistics, 2001 (inproceedings)

Abstract
We describe a novel algorithm for unsupervised segmentation of sequences into alternating Variable Memory Markov sources, first presented in [SBT01]. The algorithm is based on competitive learning between Markov models, when implemented as Prediction Suffix Trees [RST96] using the MDL principle. By applying a model clustering procedure, based on rate distortion theory combined with deterministic annealing, we obtain a hierarchical segmentation of sequences between alternating Markov sources. The method is applied successfully to unsupervised segmentation of multilingual texts into languages, where it is able to infer correctly both the number of languages and the language switching points. When applied to protein sequence families (results of the [BSMT01] work), we demonstrate the method's ability to identify biologically meaningful sub-sequences within the proteins, which correspond to signatures of important functional sub-units called domains. Our approach to protein classification (through the obtained signatures) is shown to have both conceptual and practical advantages over the currently used methods.

PDF Web [BibTex]

Support Vector Regression for Black-Box System Identification

Gretton, A., Doucet, A., Herbrich, R., Rayner, P., Schölkopf, B.

In 11th IEEE Workshop on Statistical Signal Processing, pages: 341-344, IEEE Signal Processing Society, Piscataway, NY, USA, 11th IEEE Workshop on Statistical Signal Processing, 2001 (inproceedings)

Abstract
In this paper, we demonstrate the use of support vector regression (SVR) techniques for black-box system identification. These methods derive from statistical learning theory, and are of great theoretical and practical interest. We briefly describe the theory underpinning SVR, and compare support vector methods with other approaches using radial basis networks. Finally, we apply SVR to modeling the behaviour of a hydraulic robot arm, and show that SVR improves on previously published results.
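A minimal NARX-style sketch of black-box identification with support vector regression; the lag orders, hyperparameters and helper name are assumptions, not the paper's setup.

# Regress the next output on lagged outputs and inputs (NARX-style features).
import numpy as np
from sklearn.svm import SVR

def identify(u, y, n_a=3, n_b=3):
    """u, y: input and output sequences (1-D numpy arrays) of equal length."""
    start = max(n_a, n_b)
    X = np.array([np.r_[y[t - n_a:t], u[t - n_b:t]] for t in range(start, len(y))])
    target = y[start:]
    return SVR(kernel="rbf", C=100.0, epsilon=0.01, gamma=0.1).fit(X, target)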

PostScript [BibTex]

Unsupervised Sequence Segmentation by a Mixture of Switching Variable Memory Markov Sources

Seldin, Y., Bejerano, G., Tishby, N.

In Proceedings of the 18th International Conference on Machine Learning (ICML 2001), pages: 513-520, 18th International Conference on Machine Learning (ICML), 2001 (inproceedings)

Abstract
We present a novel information theoretic algorithm for unsupervised segmentation of sequences into alternating Variable Memory Markov sources. The algorithm is based on competitive learning between Markov models, when implemented as Prediction Suffix Trees (Ron et al., 1996) using the MDL principle. By applying a model clustering procedure, based on rate distortion theory combined with deterministic annealing, we obtain a hierarchical segmentation of sequences between alternating Markov sources. The algorithm seems to be self-regulated and automatically avoids over-segmentation. The method is applied successfully to unsupervised segmentation of multilingual texts into languages, where it is able to infer correctly both the number of languages and the language switching points. When applied to protein sequence families, we demonstrate the method's ability to identify biologically meaningful sub-sequences within the proteins, which correspond to important functional sub-units called domains.

PDF [BibTex]

Kernel Machine Based Learning for Multi-View Face Detection and Pose Estimation

Cheng, Y., Fu, Q., Gu, L., Li, S., Schölkopf, B., Zhang, H.

In Proceedings Computer Vision, 2001, Vol. 2, pages: 674-679, IEEE Computer Society, 8th International Conference on Computer Vision (ICCV), 2001 (inproceedings)

DOI [BibTex]

Modeling the Dynamics of Individual Neurons of the Stomatogastric Networks with Support Vector Machines

Frontzek, T., Gutzen, C., Lal, TN., Heinzel, H-G., Eckmiller, R., Böhm, H.

Abstract Proceedings of the 6th International Congress of Neuroethology (ICN'2001) Bonn, abstract 404, 2001 (poster)

Abstract
In small rhythmically active networks the timing of individual neurons is crucial for generating different spatio-temporal motor patterns. Switching of one neuron between different rhythms can cause a transition between behavioral modes. In order to understand the dynamics of rhythmically active neurons we analyzed the oscillatory membrane potential of a pacemaker neuron and used different neural network models to predict the dynamics of its time series. In a first step we trained conventional RBF networks and Support Vector Machines (SVMs) using Gaussian kernels with intracellular recordings of the pyloric dilator neuron in the Australian crayfish, Cherax destructor albidus. As a rule, SVMs were able to learn the nonlinear dynamics of pyloric neurons faster (e.g. 15 s) than RBF networks (e.g. 309 s) under the same hardware conditions. After training, SVMs performed a better iterated one-step-ahead prediction of the time series of the pyloric dilator neuron with regard to test error and error sum. The test error decreased with increasing number of support vectors. The best SVM used 196 support vectors and produced a test error of 0.04622, as opposed to 0.07295 for the best RBF network using 26 RBF neurons. In the pacemaker neuron PD, the time point at which the membrane potential will cross the threshold for generation of its oscillatory peak is most important for determination of the test error. Interestingly, SVMs are especially better at predicting this important part of the membrane potential, which is superimposed by various synaptic inputs that drive the membrane potential to its threshold.

[BibTex]


1996


Quality Prediction of Steel Products using Neural Networks

Shin, H., Jhee, W.

In Proc. of the Korean Expert System Conference, pages: 112-124, Korean Expert System Society Conference, November 1996 (inproceedings)

[BibTex]

Comparison of view-based object recognition algorithms using realistic 3D models

Blanz, V., Schölkopf, B., Bülthoff, H., Burges, C., Vapnik, V., Vetter, T.

In Artificial Neural Networks: ICANN 96, LNCS, vol. 1112, pages: 251-256, Lecture Notes in Computer Science, (Editors: C von der Malsburg and W von Seelen and JC Vorbrüggen and B Sendhoff), Springer, Berlin, Germany, 6th International Conference on Artificial Neural Networks, July 1996 (inproceedings)

Abstract
Two view-based object recognition algorithms are compared: (1) a heuristic algorithm based on oriented filters, and (2) a support vector learning machine trained on low-resolution images of the objects. Classification performance is assessed using a high number of images generated by a computer graphics system under precisely controlled conditions. Training- and test-images show a set of 25 realistic three-dimensional models of chairs from viewing directions spread over the upper half of the viewing sphere. The percentage of correct identification of all 25 objects is measured.

PDF PDF DOI [BibTex]

Incorporating invariances in support vector learning machines

Schölkopf, B., Burges, C., Vapnik, V.

In Artificial Neural Networks: ICANN 96, Lecture Notes in Computer Science, vol. 1112, pages: 47-52, (Editors: C von der Malsburg and W von Seelen and JC Vorbrüggen and B Sendhoff), Springer, Berlin, Germany, 6th International Conference on Artificial Neural Networks, July 1996 (inproceedings)

Abstract
Developed only recently, support vector learning machines achieve high generalization ability by minimizing a bound on the expected test error; however, so far there existed no way of adding knowledge about invariances of a classification problem at hand. We present a method of incorporating prior knowledge about transformation invariances by applying transformations to support vectors, the training examples most critical for determining the classification boundary.
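A minimal sketch of the idea, in the style of a "virtual support vector" procedure: train once, apply the known invariance transformations only to the support vectors, and retrain on the enlarged set. The transformations and parameters below are placeholders (for images they would typically be small translations).

import numpy as np
from sklearn.svm import SVC

def train_with_virtual_svs(X, y, transforms, C=10.0):
    # First pass: find the training examples most critical for the boundary.
    base = SVC(kernel="rbf", C=C, gamma="scale").fit(X, y)
    sv_X, sv_y = X[base.support_], y[base.support_]
    # Apply each invariance transformation to the support vectors only.
    virtual_X = np.vstack([t(sv_X) for t in transforms])
    virtual_y = np.concatenate([sv_y for _ in transforms])
    # Retrain on the original data plus the virtual examples.
    X_aug = np.vstack([X, virtual_X])
    y_aug = np.concatenate([y, virtual_y])
    return SVC(kernel="rbf", C=C, gamma="scale").fit(X_aug, y_aug)

# usage: model = train_with_virtual_svs(X, y, [lambda a: a + 0.01, lambda a: a - 0.01])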

PDF DOI [BibTex]

A practical Monte Carlo implementation of Bayesian learning

Rasmussen, CE.

In Advances in Neural Information Processing Systems 8, pages: 598-604, (Editors: Touretzky, D.S. , M.C. Mozer, M.E. Hasselmo), MIT Press, Cambridge, MA, USA, Ninth Annual Conference on Neural Information Processing Systems (NIPS), June 1996 (inproceedings)

Abstract
A practical method for Bayesian training of feed-forward neural networks using sophisticated Monte Carlo methods is presented and evaluated. In reasonably small amounts of computer time this approach outperforms other state-of-the-art methods on 5 data-limited tasks from real world domains.

PDF Web [BibTex]

Gaussian Processes for Regression

Williams, CKI., Rasmussen, CE.

In Advances in Neural Information Processing Systems 8, pages: 514-520, (Editors: Touretzky, D.S. , M.C. Mozer, M.E. Hasselmo), MIT Press, Cambridge, MA, USA, Ninth Annual Conference on Neural Information Processing Systems (NIPS), June 1996 (inproceedings)

Abstract
The Bayesian analysis of neural networks is difficult because a simple prior over weights implies a complex prior over functions. We investigate the use of a Gaussian process prior over functions, which permits the predictive Bayesian analysis for fixed values of hyperparameters to be carried out exactly using matrix operations. Two methods, using optimization and averaging (via Hybrid Monte Carlo) over hyperparameters have been tested on a number of challenging problems and have produced excellent results.
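The exact predictive computation alluded to, in now-standard notation (mine, not necessarily the paper's): with kernel matrix K on the training inputs, noise variance \sigma_n^2, targets y, and test-point kernel vector k_*,

$$ \bar f_* = k_*^{\top}\,(K + \sigma_n^2 I)^{-1}\, y, \qquad \mathbb{V}[f_*] = k(x_*, x_*) - k_*^{\top}\,(K + \sigma_n^2 I)^{-1}\, k_*, $$

so for fixed hyperparameters the predictive distribution is obtained exactly with matrix operations; the hyperparameters themselves are then optimized or averaged over with Hybrid Monte Carlo.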

PDF Web [BibTex]

Aktives Erwerben eines Ansichtsgraphen zur diskreten Repräsentation offener Umwelten [Actively acquiring a view graph for the discrete representation of open environments]

Franz, M., Schölkopf, B., Mallot, H., Bülthoff, H.

Fortschritte der Künstlichen Intelligenz, pages: 138-147, (Editors: M. Thielscher and S.-E. Bornscheuer), 1996 (poster)

PDF PostScript [BibTex]

Does motion-blur facilitate motion detection?

Wichmann, F., Henning, G.

OSA Conference Program, pages: S127, 1996 (poster)

Abstract
Retinal-image motion induces the perceptual loss of high spatial-frequency content - motion blur - that affects broadband stimuli. The relative detectability of motion blur and motion itself, measured in 2-AFC experiments, shows that, although the blur associated with motion can be detected, motion itself is the more effective cue.

[BibTex]
