1999


Linear programs for automatic accuracy control in regression

Smola, A., Schölkopf, B., Rätsch, G.

In Artificial Neural Networks, 1999 (ICANN 99), Conference Publications No. 470, pages: 575-580, IEEE, 9th International Conference on Artificial Neural Networks, 1999 (inproceedings)

DOI [BibTex]

Classifying LEP data with support vector algorithms.

Vannerem, P., Müller, K., Smola, A., Schölkopf, B., Söldner-Rembold, S.

In Artificial Intelligence in High Energy Nuclear Physics 99, 1999 (inproceedings)

[BibTex]

Generalization Bounds via Eigenvalues of the Gram matrix

Schölkopf, B., Shawe-Taylor, J., Smola, A., Williamson, R.

(99-035), NeuroCOLT, 1999 (techreport)

[BibTex]

Pedestal effects with periodic pulse trains

Henning, G., Wichmann, F.

Perception, 28, pages: S137, 1999 (poster)

Abstract
It is important to know for theoretical reasons how performance varies with stimulus contrast. But, for objects on CRT displays, retinal contrast is limited by the linear range of the display and the modulation transfer function of the eye. For example, with an 8 c/deg sinusoidal grating at 90% contrast, the contrast of the retinal image is barely 45%; more retinal contrast is required, however, to discriminate among theories of contrast discrimination (Wichmann, Henning and Ploghaus, 1998). The stimulus with the greatest contrast at any spatial-frequency component is a periodic pulse train, which has 200% contrast at every harmonic. Such a waveform cannot, of course, be produced; the best we can do with our Mitsubishi display provides a contrast of 150% at an 8-c/deg fundamental, thus producing a retinal image with about 75% contrast. The penalty of using this stimulus is that the 2nd harmonic of the retinal image also has high contrast (with an emmetropic eye, more than 60% of the contrast of the 8-c/deg fundamental) and the mean luminance is not large (24.5 cd/m2 on our display). We have used standard 2-AFC experiments to measure the detectability of an 8-c/deg pulse train against the background of an identical pulse train of different contrast. An unusually large improvement in detectability was measured (the pedestal effect or "dipper"), and the dipper was unusually broad. The implications of these results will be discussed.
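
The claim that a pulse train carries 200% contrast at every harmonic follows from the Fourier series of an idealized, vanishingly narrow pulse train; the short derivation below is an idealization that a physical display can only approximate:

```latex
% Idealized pulse train: mean luminance L_0, fundamental frequency f, pulse width -> 0.
% Its Fourier expansion is
L(x) = L_0\Bigl[\,1 + 2\sum_{n=1}^{\infty}\cos(2\pi n f x)\Bigr],
% so every harmonic has amplitude 2L_0 and hence contrast
c_n = \frac{2L_0}{L_0} = 2 = 200\%.
```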

[BibTex]

Apprentissage Automatique et Simplicité [Machine Learning and Simplicity]

Bousquet, O.

Biologische Kybernetik, 1999, in French (diplomathesis)

PostScript [BibTex]

Is the Hippocampus a Kalman Filter?

Bousquet, O., Balakrishnan, K., Honavar, V.

In Proceedings of the Pacific Symposium on Biocomputing, 3, pages: 619-630, 1999 (inproceedings)

[BibTex]

A Comparison of Artificial Neural Networks and Cluster Analysis for Typing Biometrics Authentication

Maisuria, K., Ong, CS., Lai, .

In International Joint Conference on Neural Networks, 1999 (inproceedings)

PDF [BibTex]

Regularized principal manifolds.

Smola, A., Williamson, R., Mika, S., Schölkopf, B.

In Lecture Notes in Artificial Intelligence, Vol. 1572, pages: 214-229, (Editors: P Fischer and H-U Simon), Springer, Berlin, Germany, Computational Learning Theory: 4th European Conference, 1999 (inproceedings)

[BibTex]

Entropy numbers, operators and support vector kernels.

Williamson, R., Smola, A., Schölkopf, B.

In Lecture Notes in Artificial Intelligence, Vol. 1572, pages: 285-299, (Editors: P Fischer and H-U Simon), Springer, Berlin, Germany, Computational Learning Theory: 4th European Conference, 1999 (inproceedings)

[BibTex]

Sparse kernel feature analysis

Smola, A., Mangasarian, O., Schölkopf, B.

(99-04), Data Mining Institute, 1999, 24th Annual Conference of Gesellschaft für Klassifikation, University of Passau (techreport)

PostScript [BibTex]

Machine Learning and Language Acquisition: A Model of Child’s Learning of Turkish Morphophonology

Altun, Y.

Middle East Technical University, Ankara, Turkey, 1999 (mastersthesis)

[BibTex]


Implications of the pedestal effect for models of contrast-processing and gain-control

Wichmann, F., Henning, G.

OSA Conference Program, pages: 62, 1999 (poster)

Abstract
Understanding contrast processing is essential for understanding spatial vision. Pedestal contrast systematically affects slopes of functions relating 2-AFC contrast discrimination performance to pedestal contrast. The slopes provide crucial information because only full sets of data allow discrimination among contrast-processing and gain-control models. Issues surrounding Weber's law will also be discussed.

[BibTex]


Advances in Kernel Methods - Support Vector Learning

Schölkopf, B., Burges, C., Smola, A.

MIT Press, Cambridge, MA, 1999 (book)

[BibTex]

Fisher discriminant analysis with kernels

Mika, S., Rätsch, G., Weston, J., Schölkopf, B., Müller, K.

In Proceedings of the 1999 IEEE Signal Processing Society Workshop, 9, pages: 41-48, (Editors: Y-H Hu and J Larsen and E Wilson and S Douglas), IEEE, Neural Networks for Signal Processing IX, 1999 (inproceedings)

DOI [BibTex]

Entropy numbers, operators and support vector kernels.

Williamson, R., Smola, A., Schölkopf, B.

In Advances in Kernel Methods - Support Vector Learning, pages: 127-144, (Editors: B Schölkopf and CJC Burges and AJ Smola), MIT Press, Cambridge, MA, 1999 (inbook)

[BibTex]

1997


Comparing support vector machines with Gaussian kernels to radial basis function classifiers

Schölkopf, B., Sung, K., Burges, C., Girosi, F., Niyogi, P., Poggio, T., Vapnik, V.

IEEE Transactions on Signal Processing, 45(11):2758-2765, November 1997 (article)

Abstract
The support vector (SV) machine is a novel type of learning machine, based on statistical learning theory, which contains polynomial classifiers, neural networks, and radial basis function (RBF) networks as special cases. In the RBF case, the SV algorithm automatically determines centers, weights, and threshold that minimize an upper bound on the expected test error. The present study is devoted to an experimental comparison of these machines with a classical approach, where the centers are determined by k-means clustering, and the weights are computed using error backpropagation. We consider three machines, namely, a classical RBF machine, an SV machine with Gaussian kernel, and a hybrid system with the centers determined by the SV method and the weights trained by error backpropagation. Our results show that on the United States postal service database of handwritten digits, the SV machine achieves the highest recognition accuracy, followed by the hybrid system. The SV approach is thus not only theoretically well-founded but also superior in a practical application.
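
As a rough illustration of the kind of comparison the abstract describes (not the authors' original setup, which used the USPS database and error backpropagation), the sketch below contrasts a Gaussian-kernel SVM with an RBF network whose centres come from k-means; the dataset, bandwidth, number of centres, and the linear read-out standing in for backpropagation are all illustrative assumptions:

```python
# Minimal sketch: Gaussian-kernel SVM vs. a classical RBF network with k-means centres.
# scikit-learn's digits data stands in for USPS; a linear classifier stands in for
# error backpropagation when training the RBF network's output weights.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gamma = 1.0 / (X.shape[1] * X.var())            # a common default kernel bandwidth

# (1) SV machine with Gaussian kernel: centres, weights and threshold come out of the SV algorithm.
svm = SVC(kernel="rbf", gamma=gamma).fit(X_tr, y_tr)

# (2) Classical RBF machine: centres chosen by k-means, output weights trained separately.
centers = KMeans(n_clusters=100, n_init=10, random_state=0).fit(X_tr).cluster_centers_

def rbf_features(A):
    d2 = ((A[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rbf_net = LogisticRegression(max_iter=1000).fit(rbf_features(X_tr), y_tr)

print("SVM (Gaussian kernel):", svm.score(X_te, y_te))
print("k-means RBF network:  ", rbf_net.score(rbf_features(X_te), y_te))
```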

Web DOI [BibTex]

The view-graph approach to visual navigation and spatial memory

Mallot, H., Franz, M., Schölkopf, B., Bülthoff, H.

In Artificial Neural Networks: ICANN ’97, pages: 751-756, (Editors: W Gerstner and A Germond and M Hasler and J-D Nicoud), Springer, Berlin, Germany, 7th International Conference on Artificial Neural Networks, October 1997 (inproceedings)

Abstract
This paper describes a purely visual navigation scheme based on two elementary mechanisms (piloting and guidance) and a graph structure combining individual navigation steps controlled by these mechanisms. In robot experiments in real environments, both mechanisms have been tested, piloting in an open environment and guidance in a maze with restricted movement opportunities. The results indicate that navigation and path planning can be brought about with these simple mechanisms. We argue that the graph of local views (snapshots) is a general and biologically plausible means of representing space and integrating the various mechanisms of map behaviour.

PDF PDF DOI [BibTex]

Predicting time series with support vector machines

Müller, K., Smola, A., Rätsch, G., Schölkopf, B., Kohlmorgen, J., Vapnik, V.

In Artificial Neural Networks: ICANN ’97, pages: 999-1004, (Editors: W Gerstner and A Germond and M Hasler and J-D Nicoud), Springer, Berlin, Germany, 7th International Conference on Artificial Neural Networks, October 1997 (inproceedings)

Abstract
Support Vector Machines are used for time series prediction and compared to radial basis function networks. We make use of two different cost functions for Support Vectors: training with (i) an ε-insensitive loss and (ii) Huber's robust loss function, and discuss how to choose the regularization parameters in these models. Two applications are considered: data from (a) a noisy (normal and uniform noise) Mackey-Glass equation and (b) the Santa Fe competition (set D). In both cases Support Vector Machines show excellent performance. In case (b) the Support Vector approach improves the best known result on the benchmark by 29%.
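
For a concrete, simplified picture of the approach, the sketch below trains scikit-learn's SVR (ε-insensitive loss) on a lag embedding of a synthetic noisy series for one-step-ahead prediction; the series, the number of lags, and the C/ε values are stand-ins rather than the Mackey-Glass or Santa Fe settings used in the paper:

```python
# Minimal sketch: one-step-ahead time-series prediction with epsilon-insensitive SV regression.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
t = np.arange(1500)
series = np.sin(0.05 * t) * np.cos(0.013 * t) + 0.05 * rng.normal(size=t.size)  # toy noisy series

def embed(x, lags=6):
    """Turn a scalar series into (lag-vector, next-value) training pairs."""
    X = np.column_stack([x[i:len(x) - lags + i] for i in range(lags)])
    return X, x[lags:]

X, y = embed(series)
split = 1000
svr = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:split], y[:split])
pred = svr.predict(X[split:])
print("test RMSE:", np.sqrt(np.mean((pred - y[split:]) ** 2)))
```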

PDF DOI [BibTex]

Kernel principal component analysis

Schölkopf, B., Smola, A., Müller, K.

In Artificial neural networks: ICANN ’97, LNCS, vol. 1327, pages: 583-588, (Editors: W Gerstner and A Germond and M Hasler and J-D Nicoud), Springer, Berlin, Germany, 7th International Conference on Artificial Neural Networks, October 1997 (inproceedings)

Abstract
A new method for performing a nonlinear form of Principal Component Analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible d-pixel products in images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
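
A compact NumPy sketch of the computation described above may help: form the kernel (Gram) matrix, centre it in feature space, and read nonlinear principal components off its eigendecomposition. The Gaussian kernel, toy data and normalisation below are illustrative choices, not the paper's experiments:

```python
# Minimal kernel-PCA sketch: Gram matrix, feature-space centring, eigendecomposition.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                       # toy data

def gaussian_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = gaussian_kernel(X, X)
n = K.shape[0]
one = np.ones((n, n)) / n
K_c = K - one @ K - K @ one + one @ K @ one         # centre the kernel matrix in feature space

eigval, eigvec = np.linalg.eigh(K_c)                # eigenvalues in ascending order
idx = eigval.argsort()[::-1][:2]                    # keep the two leading components
alphas = eigvec[:, idx] / np.sqrt(eigval[idx])      # normalise the expansion coefficients
projections = K_c @ alphas                          # nonlinear principal components of the data
print(projections.shape)                            # (200, 2)
```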

PDF DOI [BibTex]

Homing by parameterized scene matching

Franz, M., Schölkopf, B., Bülthoff, H.

In Proceedings of the 4th European Conference on Artificial Life, pages: 236-245, (Editors: P Husbands and I Harvey), MIT Press, Cambridge, MA, USA, 4th European Conference on Artificial Life (ECAL97), July 1997 (inproceedings)

Abstract
In visual homing tasks, animals as well as robots can compute their movements from the current view and a snapshot taken at a home position. Solving this problem exactly would require knowledge about the distances to visible landmarks, information which is not directly available to passive vision systems. We propose a homing scheme that dispenses with accurate distance information by using parameterized disparity fields. These are obtained from an approximation that incorporates prior knowledge about perspective distortions of the visual environment. A mathematical analysis proves that the approximation does not prevent the scheme from approaching the goal with arbitrary accuracy. Mobile robot experiments are used to demonstrate the practical feasibility of the approach.

PDF [BibTex]

Das Spiel mit dem künstlichen Leben [Playing with artificial life].

Schölkopf, B.

Frankfurter Allgemeine Zeitung, Wissenschaftsbeilage, June 1997 (misc)

[BibTex]

Improving the accuracy and speed of support vector learning machines

Burges, C., Schölkopf, B.

In Advances in Neural Information Processing Systems 9, pages: 375-381, (Editors: M Mozer and MJ Jordan and T Petsche), MIT Press, Cambridge, MA, USA, Tenth Annual Conference on Neural Information Processing Systems (NIPS), May 1997 (inproceedings)

Abstract
Support Vector Learning Machines (SVM) are finding application in pattern recognition, regression estimation, and operator inversion for ill-posed problems. Against this very general backdrop, any methods for improving the generalization performance, or for improving the speed in the test phase, of SVMs are of increasing interest. In this paper we combine two such techniques on a pattern recognition problem. The method for improving generalization performance (the "virtual support vector" method) does so by incorporating known invariances of the problem. This method achieves a drop in the error rate on 10,000 NIST test digit images from 1.4% to 1%. The method for improving the speed (the "reduced set" method) does so by approximating the support vector decision surface. We apply this method to achieve a factor-of-fifty speedup in the test phase over the virtual support vector machine. The combined approach yields a machine which is both 22 times faster than the original machine and which has better generalization performance, achieving 1.1% error. The virtual support vector method is applicable to any SVM problem with known invariances. The reduced set method is applicable to any support vector machine.
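
The "virtual support vector" idea is easy to sketch; the toy reconstruction below (scikit-learn digits, a wrap-around one-pixel shift as the assumed invariance, arbitrary kernel parameters) only shows the mechanics of retraining on support vectors plus their transformed copies, and makes no attempt to reproduce the reported error rates or the reduced-set speedup:

```python
# Minimal "virtual support vector" sketch: train an SVM, transform only its support
# vectors with a known (here: hypothetical, wrap-around) invariance, and retrain.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)                 # stand-in for the NIST digits
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = SVC(kernel="rbf", gamma=0.001, C=10.0).fit(X_tr, y_tr)

def shift(images, dx):
    """Shift 8x8 digit images horizontally by dx pixels (wraps around; a toy invariance)."""
    imgs = images.reshape(-1, 8, 8)
    return np.roll(imgs, dx, axis=2).reshape(len(images), -1)

sv_X, sv_y = base.support_vectors_, y_tr[base.support_]
X_virtual = np.vstack([sv_X, shift(sv_X, 1), shift(sv_X, -1)])
y_virtual = np.concatenate([sv_y, sv_y, sv_y])

vsv = SVC(kernel="rbf", gamma=0.001, C=10.0).fit(X_virtual, y_virtual)
print("base SVM:      ", base.score(X_te, y_te))
print("virtual-SV SVM:", vsv.score(X_te, y_te))
```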

PDF Web [BibTex]

Homing by parameterized scene matching

Franz, M., Schölkopf, B., Bülthoff, H.

(46), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, February 1997 (techreport)

Abstract
In visual homing tasks, animals as well as robots can compute their movements from the current view and a snapshot taken at a home position. Solving this problem exactly would require knowledge about the distances to visible landmarks, information which is not directly available to passive vision systems. We propose a homing scheme that dispenses with accurate distance information by using parameterized disparity fields. These are obtained from an approximation that incorporates prior knowledge about perspective distortions of the visual environment. A mathematical analysis proves that the approximation does not prevent the scheme from approaching the goal with arbitrary accuracy. Mobile robot experiments are used to demonstrate the practical feasibility of the approach.

[BibTex]

Learning view graphs for robot navigation

Franz, M., Schölkopf, B., Georg, P., Mallot, H., Bülthoff, H.

In Proceedings of the 1st Intl. Conf. on Autonomous Agents, pages: 138-147, (Editors: Johnson, W.L.), ACM Press, New York, NY, USA, First International Conference on Autonomous Agents (AGENTS '97), February 1997 (inproceedings)

Abstract
We present a purely vision-based scheme for learning a parsimonious representation of an open environment. Using simple exploration behaviours, our system constructs a graph of appropriately chosen views. To navigate between views connected in the graph, we employ a homing strategy inspired by findings of insect ethology. Simulations and robot experiments demonstrate the feasibility of the proposed approach.

PDF Web DOI [BibTex]

Masking by plaid patterns is not explained by adaptation, simple contrast gain-control or distortion products

Wichmann, F., Tollin, D.

Investigative Ophthalmology and Visual Science, 38(4), pages: S631, 1997 (poster)

[BibTex]

Masking by plaid patterns: spatial frequency tuning and contrast dependency

Wichmann, F., Tollin, D.

OSA Conference Program, pages: 97, 1997 (poster)

Abstract
The detectability of horizontally orientated sinusoidal signals at different spatial frequencies was measured in standard 2AFC tasks in the presence of two-component plaid patterns of different orientation and contrast. The shape of the resulting masking surface provides insight into, and constrains models of, the underlying masking mechanisms.

[BibTex]

ATM-dependent telomere loss in aging human diploid fibroblasts and DNA damage lead to the post-translational activation of p53 protein involving poly(ADP-ribose) polymerase.

Vaziri, H., MD, .., RC, .., Davison, T., YS, .., CH, .., GG, .., Benchimol, S.

The European Molecular Biology Organization Journal, 16(19):6018-6033, 1997 (article)

Web [BibTex]

Support vector learning

Schölkopf, B.

pages: 173, Oldenbourg, München, Germany, 1997, also: Doctoral dissertation, Technische Universität Berlin, 1997 (book)

PDF GZIP [BibTex]

1993


Presynaptic and Postsynaptic Competition in Models for the Development of Neuromuscular Connections

Rasmussen, CE., Willshaw, DJ.

Biological Cybernetics, 68, pages: 409-419, 1993 (article)

Abstract
The development of the nervous system involves in many cases interactions on a local scale rather than the execution of a fully specified genetic blueprint. The problem is to discover the nature of these interactions and the factors on which they depend. The withdrawal of polyinnervation in developing muscle is an example where such competitive interactions play an important role. We examine the possible types of competition in formal models that have plausible biological implementations. By relating the behaviour of the models to the anatomical and physiological findings we show that a model that incorporates two types of competition is superior to others. Analysis suggests that the phenomenon of intrinsic withdrawal is a side effect of the competitive mechanisms rather than a separate non-competitive feature. Full scale computer simulations have been used to confirm the capabilities of this model.

PostScript [BibTex]

Cartesian Dynamics of Simple Molecules: X Linear Quadratomics (C∞v Symmetry).

Anderson, A., Davison, T., Nagi, N., Schlueter, S.

Spectroscopy Letters, 26, pages: 509-522, 1993 (article)

[BibTex]
