

2017


no image
Context-Driven Movement Primitive Adaptation

Wilbers, D., Lioutikov, R., Peters, J.

IEEE International Conference on Robotics and Automation (ICRA), pages: 3469-3475, IEEE, May 2017 (conference)

DOI Project Page [BibTex]

A Learning-based Shared Control Architecture for Interactive Task Execution

Farraj, F. B., Osa, T., Pedemonte, N., Peters, J., Neumann, G., Giordano, P.

IEEE International Conference on Robotics and Automation (ICRA), pages: 329-335, IEEE, May 2017 (conference)

DOI Project Page Project Page [BibTex]

Frequency Peak Features for Low-Channel Classification in Motor Imagery Paradigms

Jayaram, V., Schölkopf, B., Grosse-Wentrup, M.

Proceedings of the 8th International IEEE/EMBS Conference on Neural Engineering (NER), pages: 321-324, May 2017 (conference)

DOI [BibTex]

Empowered skills

Gabriel, A., Akrour, R., Peters, J., Neumann, G.

IEEE International Conference on Robotics and Automation (ICRA), pages: 6435-6441, IEEE, May 2017 (conference)

DOI Project Page [BibTex]

Layered direct policy search for learning hierarchical skills

End, F., Akrour, R., Peters, J., Neumann, G.

IEEE International Conference on Robotics and Automation (ICRA), pages: 6442-6448, IEEE, May 2017 (conference)

DOI Project Page [BibTex]

Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic

Gu, S., Lillicrap, T., Ghahramani, Z., Turner, R. E., Levine, S.

Proceedings International Conference on Learning Representations (ICLR), OpenReviews.net, International Conference on Learning Representations, April 2017 (conference)

PDF link (url) Project Page [BibTex]

Categorical Reparameterization with Gumbel-Softmax

Jang, E., Gu, S., Poole, B.

Proceedings International Conference on Learning Representations 2017, OpenReviews.net, International Conference on Learning Representations, April 2017 (conference)

link (url) [BibTex]

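The Gumbel-Softmax estimator from the paper above is widely reimplemented, so a minimal NumPy sketch of the sampling step may be useful; the function and parameter names here are ours, not the authors':

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=1.0, rng=None):
    """Draw one relaxed sample from a categorical distribution given by
    `logits`: perturb the logits with i.i.d. Gumbel(0, 1) noise and apply
    a temperature-scaled softmax. As tau -> 0 the sample approaches a
    one-hot vector; larger tau gives smoother, lower-variance samples."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(1e-12, 1.0, size=np.shape(logits))
    g = -np.log(-np.log(u))              # Gumbel(0, 1) via inverse CDF
    y = (np.asarray(logits) + g) / tau
    y = y - y.max()                      # subtract max for numerical stability
    e = np.exp(y)
    return e / e.sum()

probs = gumbel_softmax_sample(np.log([0.1, 0.3, 0.6]), tau=0.5)
```

The returned vector lies on the probability simplex, which is what makes the relaxation differentiable with respect to the logits.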
DeepCoder: Learning to Write Programs

Balog, M., Gaunt, A. L., Brockschmidt, M., Nowozin, S., Tarlow, D.

Proceedings International Conference on Learning Representations 2017, OpenReviews.net, International Conference on Learning Representations, April 2017 (conference)

Arxiv link (url) Project Page [BibTex]

Distilling Information Reliability and Source Trustworthiness from Digital Traces

Tabibian, B., Valera, I., Farajtabar, M., Song, L., Schölkopf, B., Gomez Rodriguez, M.

Proceedings of the 26th International Conference on World Wide Web (WWW), pages: 847-855, (Editors: Barrett, R., Cummings, R., Agichtein, E. and Gabrilovich, E. ), ACM, April 2017 (conference)

Project DOI Project Page Project Page [BibTex]

Local Group Invariant Representations via Orbit Embeddings

Raj, A., Kumar, A., Mroueh, Y., Fletcher, T., Schölkopf, B.

Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), 54, pages: 1225-1235, Proceedings of Machine Learning Research, (Editors: Aarti Singh and Jerry Zhu), April 2017 (conference)

link (url) Project Page [BibTex]

Pre-Movement Contralateral EEG Low Beta Power Is Modulated with Motor Adaptation Learning

Ozdenizci, O., Yalcin, M., Erdogan, A., Patoglu, V., Grosse-Wentrup, M., Cetin, M.

International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages: 934-938, March 2017 (conference)

DOI [BibTex]

Automatic detection of motion artifacts in MR images using CNNs

Meding, K., Loktyushin, A., Hirsch, M.

42nd IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages: 811-815, March 2017 (conference)

DOI [BibTex]

Catching heuristics are optimal control policies

Belousov, B., Neumann, G., Rothkopf, C., Peters, J.

Proceedings of the Thirteenth Karniel Computational Motor Control Workshop, March 2017 (conference)

link (url) [BibTex]

DiSMEC – Distributed Sparse Machines for Extreme Multi-label Classification

Babbar, R., Schölkopf, B.

Proceedings of the Tenth ACM International Conference on Web Search and Data Mining (WSDM), pages: 721-729, February 2017 (conference)

DOI Project Page [BibTex]

Policy Search with High-Dimensional Context Variables

Tangkaratt, V., van Hoof, H., Parisi, S., Neumann, G., Peters, J., Sugiyama, M.

Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI), pages: 2632-2638, (Editors: Satinder P. Singh and Shaul Markovitch), AAAI Press, February 2017 (conference)

link (url) Project Page [BibTex]

Iterative Feedback-basierte Korrekturstrategien beim Bewegungslernen von Mensch-Roboter-Dyaden

Ewerton, M., Kollegger, G., Maeda, G., Wiemeyer, J., Peters, J.

In DVS Sportmotorik 2017, 2017 (inproceedings)

link (url) [BibTex]

BIMROB - Bidirectional Interaction between human and robot for the learning of movements - Robot trains human - Human trains robot

Kollegger, G., Wiemeyer, J., Ewerton, M., Peters, J.

In Innovation & Technologie im Sport - 23. Sportwissenschaftlicher Hochschultag der deutschen Vereinigung für Sportwissenschaft, pages: 179, (Editors: A. Schwirtz, F. Mess, Y. Demetriou & V. Senner), Czwalina-Feldhaus, 2017 (inproceedings)

[BibTex]

BIMROB – Bidirektionale Interaktion von Mensch und Roboter beim Bewegungslernen

Wiemeyer, J., Peters, J., Kollegger, G., Ewerton, M.

DVS Sportmotorik 2017, 2017 (conference)

link (url) [BibTex]


2002


Gender Classification of Human Faces

Graf, A., Wichmann, F.

In Biologically Motivated Computer Vision, pages: 1-18, (Editors: Bülthoff, H. H., S.W. Lee, T. A. Poggio and C. Wallraven), Springer, Berlin, Germany, Second International Workshop on Biologically Motivated Computer Vision (BMCV), November 2002 (inproceedings)

Abstract
This paper addresses the issue of combining pre-processing methods—dimensionality reduction using Principal Component Analysis (PCA) and Locally Linear Embedding (LLE)—with Support Vector Machine (SVM) classification for a behaviorally important task in humans: gender classification. A processed version of the MPI head database is used as stimulus set. First, summary statistics of the head database are studied. Subsequently the optimal parameters for LLE and the SVM are sought heuristically. These values are then used to compare the original face database with its processed counterpart and to assess the behavior of a SVM with respect to changes in illumination and perspective of the face images. Overall, PCA was superior in classification performance and allowed linear separability.

PDF PDF DOI [BibTex]

Insect-Inspired Estimation of Self-Motion

Franz, MO., Chahl, JS.

In Biologically Motivated Computer Vision, LNCS 2525, pages: 171-180, (Editors: Bülthoff, H.H., S.W. Lee, T.A. Poggio, C. Wallraven), Springer, Berlin, Germany, Second International Workshop on Biologically Motivated Computer Vision (BMCV), November 2002 (inproceedings)

Abstract
The tangential neurons in the fly brain are sensitive to the typical optic flow patterns generated during self-motion. In this study, we examine whether a simplified linear model of these neurons can be used to estimate self-motion from the optic flow. We present a theory for the construction of an optimal linear estimator incorporating prior knowledge about the environment. The optimal estimator is tested on a gantry carrying an omnidirectional vision sensor. The experiments show that the proposed approach leads to accurate and robust estimates of rotation rates, whereas translation estimates turn out to be less reliable.

PDF PDF DOI [BibTex]

Combining sensory Information to Improve Visualization

Ernst, M., Banks, M., Wichmann, F., Maloney, L., Bülthoff, H.

In Proceedings of the Conference on Visualization '02 (VIS '02), pages: 571-574, (Editors: Moorhead, R., M. Joy), IEEE, Piscataway, NJ, USA, IEEE Conference on Visualization (VIS '02), October 2002 (inproceedings)

Abstract
Seemingly effortlessly the human brain reconstructs the three-dimensional environment surrounding us from the light pattern striking the eyes. This seems to be true across almost all viewing and lighting conditions. One important factor for this apparent easiness is the redundancy of information provided by the sensory organs. For example, perspective distortions, shading, motion parallax, or the disparity between the two eyes' images are all, at least partly, redundant signals which provide us with information about the three-dimensional layout of the visual scene. Our brain uses all these different sensory signals and combines the available information into a coherent percept. In displays visualizing data, however, the information is often highly reduced and abstracted, which may lead to an altered perception and therefore a misinterpretation of the visualized data. In this panel we will discuss mechanisms involved in the combination of sensory information and their implications for simulations using computer displays, as well as problems resulting from current display technology such as cathode-ray tubes.

PDF Web [BibTex]

Sampling Techniques for Kernel Methods

Achlioptas, D., McSherry, F., Schölkopf, B.

In Advances in neural information processing systems 14 , pages: 335-342, (Editors: TG Dietterich and S Becker and Z Ghahramani), MIT Press, Cambridge, MA, USA, 15th Annual Neural Information Processing Systems Conference (NIPS), September 2002 (inproceedings)

Abstract
We propose randomized techniques for speeding up Kernel Principal Component Analysis on three levels: sampling and quantization of the Gram matrix in training, randomized rounding in evaluating the kernel expansions, and random projections in evaluating the kernel itself. In all three cases, we give sharp bounds on the accuracy of the obtained approximations.

PDF Web [BibTex]

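Of the three levels the abstract names, the first, sampling the Gram matrix, is easy to illustrate. The sketch below is our hedged rendering of the general idea (names and the exact scheme are ours, not the authors'): keep each off-diagonal entry with probability p, rescaled by 1/p, giving a sparse unbiased estimate of K.

```python
import numpy as np

def sparsify_gram(K, p, rng=None):
    """Keep each off-diagonal entry of the symmetric Gram matrix K
    independently with probability p, rescaled by 1/p so the result is an
    unbiased estimate of K, while only ~p of the entries remain non-zero."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = K.shape[0]
    mask = np.triu(rng.random((n, n)) < p, 1)   # sample the upper triangle only
    mask = mask | mask.T                        # mirror to preserve symmetry
    S = np.where(mask, K / p, 0.0)
    S[np.diag_indices(n)] = K.diagonal()        # keep the diagonal exact
    return S

X = np.random.default_rng(1).standard_normal((50, 5))
K = X @ X.T                                     # a linear-kernel Gram matrix
S = sparsify_gram(K, p=0.5)
```

Because the perturbation is zero-mean and independent across entries, the leading eigenstructure of S tracks that of K, which is what makes such approximations usable inside kernel PCA.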
The Infinite Hidden Markov Model

Beal, MJ., Ghahramani, Z., Rasmussen, CE.

In Advances in Neural Information Processing Systems 14, pages: 577-584, (Editors: Dietterich, T.G. , S. Becker, Z. Ghahramani), MIT Press, Cambridge, MA, USA, Fifteenth Annual Neural Information Processing Systems Conference (NIPS), September 2002 (inproceedings)

Abstract
We show that it is possible to extend hidden Markov models to have a countably infinite number of hidden states. By using the theory of Dirichlet processes we can implicitly integrate out the infinitely many transition parameters, leaving only three hyperparameters which can be learned from data. These three hyperparameters define a hierarchical Dirichlet process capable of capturing a rich set of transition dynamics. The three hyperparameters control the time scale of the dynamics, the sparsity of the underlying state-transition matrix, and the expected number of distinct hidden states in a finite sequence. In this framework it is also natural to allow the alphabet of emitted symbols to be infinite - consider, for example, symbols being possible words appearing in English text.

PDF Web [BibTex]

A new discriminative kernel from probabilistic models

Tsuda, K., Kawanabe, M., Rätsch, G., Sonnenburg, S., Müller, K.

In Advances in Neural Information Processing Systems 14, pages: 977-984, (Editors: Dietterich, T.G. , S. Becker, Z. Ghahramani), MIT Press, Cambridge, MA, USA, Fifteenth Annual Neural Information Processing Systems Conference (NIPS), September 2002 (inproceedings)

Abstract
Recently, Jaakkola and Haussler proposed a method for constructing kernel functions from probabilistic models. Their so-called "Fisher kernel" has been combined with discriminative classifiers such as SVMs and applied successfully in, e.g., DNA and protein analysis. Whereas the Fisher kernel (FK) is calculated from the marginal log-likelihood, we propose the TOP kernel derived from Tangent vectors Of Posterior log-odds. Furthermore, we develop a theoretical framework on feature extractors from probabilistic models and use it for analyzing the TOP kernel. In experiments, our new discriminative TOP kernel compares favorably to the Fisher kernel.

PDF Web [BibTex]

Incorporating Invariances in Non-Linear Support Vector Machines

Chapelle, O., Schölkopf, B.

In Advances in Neural Information Processing Systems 14, pages: 609-616, (Editors: TG Dietterich and S Becker and Z Ghahramani), MIT Press, Cambridge, MA, USA, 15th Annual Neural Information Processing Systems Conference (NIPS), September 2002 (inproceedings)

Abstract
The choice of an SVM kernel corresponds to the choice of a representation of the data in a feature space and, to improve performance, it should therefore incorporate prior knowledge such as known transformation invariances. We propose a technique which extends earlier work and aims at incorporating invariances in nonlinear kernels. We show on a digit recognition task that the proposed approach is superior to the Virtual Support Vector method, which previously had been the method of choice.

PDF Web [BibTex]

Kernel feature spaces and nonlinear blind source separation

Harmeling, S., Ziehe, A., Kawanabe, M., Müller, K.

In Advances in Neural Information Processing Systems 14, pages: 761-768, (Editors: Dietterich, T. G., S. Becker, Z. Ghahramani), MIT Press, Cambridge, MA, USA, Fifteenth Annual Neural Information Processing Systems Conference (NIPS), September 2002 (inproceedings)

Abstract
In kernel based learning the data is mapped to a kernel feature space of a dimension that corresponds to the number of training data points. In practice, however, the data forms a smaller submanifold in feature space, a fact that has been used e.g. by reduced set techniques for SVMs. We propose a new mathematical construction that permits to adapt to the intrinsic dimension and to find an orthonormal basis of this submanifold. In doing so, computations get much simpler and more important our theoretical framework allows to derive elegant kernelized blind source separation (BSS) algorithms for arbitrary invertible nonlinear mixings. Experiments demonstrate the good performance and high computational efficiency of our kTDSEP algorithm for the problem of nonlinear BSS.

PDF Web [BibTex]

Algorithms for Learning Function Distinguishable Regular Languages

Fernau, H., Radl, A.

In Joint IAPR International Workshop on Structural, Syntactic, and Statistical Pattern Recognition, pages: 64-73, (Editors: Caelli, T. , A. Amin, R. P.W. Duin, M. Kamel, D. de Ridder), Springer, Berlin, Germany, Joint IAPR International Workshop on Structural, Syntactic, and Statistical Pattern Recognition, August 2002 (inproceedings)

Abstract
Function distinguishable languages were introduced as a new methodology of defining characterizable subclasses of the regular languages which are learnable from text. Here, we give details on the implementation and the analysis of the corresponding learning algorithms. We also discuss problems which might occur in practical applications.

PDF DOI [BibTex]

Decision Boundary Pattern Selection for Support Vector Machines

Shin, H., Cho, S.

In Proc. of the Korean Data Mining Conference, pages: 33-41, Korean Data Mining Conference, May 2002 (inproceedings)

[BibTex]

k-NN based Pattern Selection for Support Vector Classifiers

Shin, H., Cho, S.

In Proc. of the Korean Industrial Engineers Conference, pages: 645-651, Korean Industrial Engineers Conference, May 2002 (inproceedings)

[BibTex]

Microarrays: How Many Do You Need?

Zien, A., Fluck, J., Zimmer, R., Lengauer, T.

In RECOMB 2002, pages: 321-330, ACM Press, New York, NY, USA, Sixth Annual International Conference on Research in Computational Molecular Biology, April 2002 (inproceedings)

Abstract
We estimate the number of microarrays that is required in order to gain reliable results from a common type of study: the pairwise comparison of different classes of samples. Current knowledge seems to suffice for the construction of models that are realistic with respect to searches for individual differentially expressed genes. Such models allow us to investigate the dependence of the required number of samples on the relevant parameters: the biological variability of the samples within each class; the fold changes in expression; the detection sensitivity of the microarrays; and the acceptable error rates of the results. We supply experimentalists with general conclusions as well as a freely accessible Java applet at http://cartan.gmd.de/~zien/classsize/ for fine-tuning simulations to their particular actualities. Since the situation can be assumed to be very similar for large-scale proteomics and metabolomics studies, our methods and results might also apply there.

Web DOI [BibTex]

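The dependence the abstract describes, of the required number of arrays on within-class variability, fold change, and error rates, can be illustrated with the textbook two-sample normal-approximation formula. This is our simplified stand-in for the paper's simulation models; the function name and default rates are ours.

```python
from math import ceil, log2
from statistics import NormalDist

def arrays_per_class(fold_change, sigma, alpha=0.001, power=0.95):
    """Microarrays needed per class to detect a given fold change for one
    gene in a two-class comparison, assuming normally distributed log2
    expression with within-class standard deviation `sigma`. `alpha` is
    kept small to compensate for testing thousands of genes in parallel."""
    delta = abs(log2(fold_change))               # effect size on the log2 scale
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)          # two-sided significance threshold
    z_beta = nd.inv_cdf(power)                   # power requirement
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

n = arrays_per_class(fold_change=2.0, sigma=1.0)
```

On this formula, doubling the within-class variability quadruples the required number of arrays, while larger fold changes reduce it quadratically.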
Pattern Selection for Support Vector Classifiers

Shin, H., Cho, S.

In Ideal 2002, pages: 97-103, (Editors: Yin, H. , N. Allinson, R. Freeman, J. Keane, S. Hubbard), Springer, Berlin, Germany, Third International Conference on Intelligent Data Engineering and Automated Learning, January 2002 (inproceedings)

Abstract
SVMs tend to take a very long time to train with a large data set. If "redundant" patterns are identified and deleted in pre-processing, the training time could be reduced significantly. We propose a k-nearest neighbors(k-NN) based pattern selection method. The method tries to select the patterns that are near the decision boundary and that are correctly labeled. The simulations over synthetic data sets showed promising results: (1) By converting a non-separable problem to a separable one, the search for an optimal error tolerance parameter became unnecessary. (2) SVM training time decreased by two orders of magnitude without any loss of accuracy. (3) The redundant SVs were substantially reduced.

PDF Web DOI [BibTex]

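The selection rule sketched in the abstract, keep patterns that lie near the decision boundary and are correctly labelled, can be roughly illustrated as follows. The function name and the concrete thresholds are our guesses at a plausible criterion, not the authors' exact method.

```python
import numpy as np

def select_boundary_patterns(X, y, k=3):
    """Keep pattern i if (a) its k nearest neighbours span both classes,
    so it lies near the decision boundary, and (b) at least half of those
    neighbours carry its own label, so it is plausibly correctly labelled.
    Returns the indices of the selected patterns."""
    X, y = np.asarray(X, float), np.asarray(y)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # a point is not its own neighbour
    nn = np.argsort(d, axis=1)[:, :k]         # k nearest neighbours per point
    keep = []
    for i in range(len(X)):
        labels = y[nn[i]]
        near_boundary = len(set(labels.tolist())) > 1
        label_agrees = int((labels == y[i]).sum()) * 2 >= k
        if near_boundary and label_agrees:
            keep.append(i)
    return np.array(keep, dtype=int)

# Two 1-D clusters; only the two patterns facing the gap survive selection.
X = np.array([[0.0], [0.1], [0.2], [0.3], [0.4],
              [0.55], [0.65], [0.75], [0.85], [0.95]])
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
selected = select_boundary_patterns(X, y, k=3)
```

Training an SVM only on the selected patterns discards the interior points that would not become support vectors anyway, which is the source of the speedup the abstract reports.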
The leave-one-out kernel

Tsuda, K., Kawanabe, M.

In Artificial Neural Networks -- ICANN 2002, LNCS 2415, pages: 727-732, (Editors: Dorronsoro, J. R.), Springer, Berlin, Germany, 2002 (inproceedings)

PDF [BibTex]

Localized Rademacher Complexities

Bartlett, P., Bousquet, O., Mendelson, S.

In Proceedings of the 15th Annual Conference on Computational Learning Theory (COLT 2002), pages: 44-58, 2002 (inproceedings)

Abstract
We investigate the behaviour of global and local Rademacher averages. We present new error bounds which are based on the local averages and indicate how data-dependent local averages can be estimated without {it a priori} knowledge of the class at hand.

PDF PostScript [BibTex]

Film Cooling: A Comparative Study of Different Heaterfoil Configurations for Liquid Crystals Experiments

Vogel, G., Graf, ABA., Weigand, B.

In ASME TURBO EXPO 2002, GT-2002-30552, Amsterdam, 2002 (inproceedings)

PDF [BibTex]

Some Local Measures of Complexity of Convex Hulls and Generalization Bounds

Bousquet, O., Koltchinskii, V., Panchenko, D.

In Proceedings of the 15th Annual Conference on Computational Learning Theory (COLT 2002), 2002 (inproceedings)

Abstract
We investigate measures of complexity of function classes based on continuity moduli of Gaussian and Rademacher processes. For Gaussian processes, we obtain bounds on the continuity modulus on the convex hull of a function class in terms of the same quantity for the class itself. We also obtain new bounds on generalization error in terms of localized Rademacher complexities. This allows us to prove new results about generalization performance for convex hulls in terms of characteristics of the base class. As a byproduct, we obtain a simple proof of some of the known bounds on the entropy of convex hulls.

PDF PostScript [BibTex]

A kernel approach for learning from almost orthogonal patterns

Schölkopf, B., Weston, J., Eskin, E., Leslie, C., Noble, W.

In Principles of Data Mining and Knowledge Discovery, Lecture Notes in Computer Science 2430/2431, pages: 511-528, (Editors: T Elomaa and H Mannila and H Toivonen), Springer, Berlin, Germany, 13th European Conference on Machine Learning (ECML) and 6th European Conference on Principles and Practice of Knowledge Discovery in Databases (PKDD'2002), 2002 (inproceedings)

PostScript DOI [BibTex]

Infinite Mixtures of Gaussian Process Experts

Rasmussen, CE., Ghahramani, Z.

In Advances in Neural Information Processing Systems 14, (Editors: Dietterich, Thomas G.; Becker, Suzanna; Ghahramani, Zoubin), MIT Press, Cambridge, MA, USA, 2002 (inproceedings)

Abstract
We present an extension to the Mixture of Experts (ME) model, where the individual experts are Gaussian Process (GP) regression models. Using an input-dependent adaptation of the Dirichlet Process, we implement a gating network for an infinite number of Experts. Inference in this model may be done efficiently using a Markov Chain relying on Gibbs sampling. The model allows the effective covariance function to vary with the inputs, and may handle large datasets -- thus potentially overcoming two of the biggest hurdles with GP models. Simulations show the viability of this approach.

PDF PostScript [BibTex]

Marginalized kernels for RNA sequence data analysis

Kin, T., Tsuda, K., Asai, K.

In Genome Informatics 2002, pages: 112-122, (Editors: Lathrop, R. H.; Nakai, K.; Miyano, S.; Takagi, T.; Kanehisa, M.), Genome Informatics, 2002, (Best Paper Award) (inproceedings)

Web [BibTex]

Luminance Artifacts on CRT Displays

Wichmann, F.

In IEEE Visualization, pages: 571-574, (Editors: Moorhead, R.; Gross, M.; Joy, K. I.), IEEE Visualization, 2002 (inproceedings)

Abstract
Most visualization panels today are still built around cathode-ray tubes (CRTs), certainly on personal desktops at work and at home. Whilst capable of producing pleasing images for common applications ranging from email writing to TV and DVD presentation, it is as well to note that there are a number of nonlinear transformations between input (voltage) and output (luminance) which distort the digital and/or analogue images send to a CRT. Some of them are input-independent and hence easy to fix, e.g. gamma correction, but others, such as pixel interactions, depend on the content of the input stimulus and are thus harder to compensate for. CRT-induced image distortions cause problems not only in basic vision research but also for applications where image fidelity is critical, most notably in medicine (digitization of X-ray images for diagnostic purposes) and in forms of online commerce, such as the online sale of images, where the image must be reproduced on some output device which will not have the same transfer function as the customer's CRT. I will present measurements from a number of CRTs and illustrate how some of their shortcomings may be problematic for the aforementioned applications.

[BibTex]


1997


The view-graph approach to visual navigation and spatial memory

Mallot, H., Franz, M., Schölkopf, B., Bülthoff, H.

In Artificial Neural Networks: ICANN ’97, pages: 751-756, (Editors: W Gerstner and A Germond and M Hasler and J-D Nicoud), Springer, Berlin, Germany, 7th International Conference on Artificial Neural Networks, October 1997 (inproceedings)

Abstract
This paper describes a purely visual navigation scheme based on two elementary mechanisms (piloting and guidance) and a graph structure combining individual navigation steps controlled by these mechanisms. In robot experiments in real environments, both mechanisms have been tested, piloting in an open environment and guidance in a maze with restricted movement opportunities. The results indicate that navigation and path planning can be brought about with these simple mechanisms. We argue that the graph of local views (snapshots) is a general and biologically plausible means of representing space and integrating the various mechanisms of map behaviour.

PDF PDF DOI [BibTex]

Predicting time series with support vector machines

Müller, K., Smola, A., Rätsch, G., Schölkopf, B., Kohlmorgen, J., Vapnik, V.

In Artificial Neural Networks: ICANN ’97, pages: 999-1004, (Editors: W Gerstner and A Germond and M Hasler and J-D Nicoud), Springer, Berlin, Germany, 7th International Conference on Artificial Neural Networks, October 1997 (inproceedings)

Abstract
Support Vector Machines are used for time series prediction and compared to radial basis function networks. We make use of two different cost functions for Support Vectors: training with (i) an e insensitive loss and (ii) Huber's robust loss function and discuss how to choose the regularization parameters in these models. Two applications are considered: data from (a) a noisy (normal and uniform noise) Mackey Glass equation and (b) the Santa Fe competition (set D). In both cases Support Vector Machines show an excellent performance. In case (b) the Support Vector approach improves the best known result on the benchmark by a factor of 29%.

PDF DOI [BibTex]



Kernel principal component analysis

Schölkopf, B., Smola, A., Müller, K.

In Artificial neural networks: ICANN ’97, LNCS, vol. 1327, pages: 583-588, (Editors: W Gerstner and A Germond and M Hasler and J-D Nicoud), Springer, Berlin, Germany, 7th International Conference on Artificial Neural Networks, October 1997 (inproceedings)

Abstract
A new method for performing a nonlinear form of Principal Component Analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in highdimensional feature spaces, related to input space by some nonlinear map; for instance the space of all possible d-pixel products in images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.

PDF DOI [BibTex]

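The procedure summarised in the abstract, computing principal components in a kernel-induced feature space, fits in a few lines of NumPy. This is an illustrative RBF-kernel sketch; the function name and the normalisation details are ours.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA with an RBF kernel: build the Gram matrix, centre it in
    feature space, eigendecompose, and project the training points onto
    the leading nonlinear principal components."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one   # double-centre in feature space
    w, V = np.linalg.eigh(Kc)                    # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:n_components]     # pick the largest ones
    w, V = w[idx], V[:, idx]
    alphas = V / np.sqrt(np.maximum(w, 1e-12))   # normalise expansion coefficients
    return Kc @ alphas                           # projections of the training points

Z = kernel_pca(np.random.default_rng(0).standard_normal((20, 3)),
               n_components=2, gamma=0.5)
```

All computation happens on the n-by-n Gram matrix, so the feature space (for the RBF kernel, infinite-dimensional) is never represented explicitly.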
Homing by parameterized scene matching

Franz, M., Schölkopf, B., Bülthoff, H.

In Proceedings of the 4th European Conference on Artificial Life (ECAL97), pages: 236-245, (Editors: P Husbands and I Harvey), MIT Press, Cambridge, MA, USA, July 1997 (inproceedings)

Abstract
In visual homing tasks, animals as well as robots can compute their movements from the current view and a snapshot taken at a home position. Solving this problem exactly would require knowledge about the distances to visible landmarks, information, which is not directly available to passive vision systems. We propose a homing scheme that dispenses with accurate distance information by using parameterized disparity fields. These are obtained from an approximation that incorporates prior knowledge about perspective distortions of the visual environment. A mathematical analysis proves that the approximation does not prevent the scheme from approaching the goal with arbitrary accuracy. Mobile robot experiments are used to demonstrate the practical feasibility of the approach.

PDF [BibTex]

Improving the accuracy and speed of support vector learning machines

Burges, C., Schölkopf, B.

In Advances in Neural Information Processing Systems 9, pages: 375-381, (Editors: M Mozer and MJ Jordan and T Petsche), MIT Press, Cambridge, MA, USA, Tenth Annual Conference on Neural Information Processing Systems (NIPS), May 1997 (inproceedings)

Abstract
Support Vector Learning Machines (SVM) are finding application in pattern recognition, regression estimation, and operator inversion for ill-posed problems. Against this very general backdrop, any methods for improving the generalization performance, or for improving the speed in the test phase, of SVMs are of increasing interest. In this paper we combine two such techniques on a pattern recognition problem. The method for improving generalization performance (the "virtual support vector" method) does so by incorporating known invariances of the problem. This method achieves a drop in the error rate on 10,000 NIST test digit images from 1.4% to 1.0%. The method for improving the speed (the "reduced set" method) does so by approximating the support vector decision surface. We apply this method to achieve a factor of fifty speedup in test phase over the virtual support vector machine. The combined approach yields a machine which is both 22 times faster than the original machine and which has better generalization performance, achieving 1.1% error. The virtual support vector method is applicable to any SVM problem with known invariances; the reduced set method is applicable to any support vector machine.

PDF Web [BibTex]

Learning view graphs for robot navigation

Franz, M., Schölkopf, B., Georg, P., Mallot, H., Bülthoff, H.

In Proceedings of the 1st Intl. Conf. on Autonomous Agents, pages: 138-147, (Editors: Johnson, W.L.), ACM Press, New York, NY, USA, First International Conference on Autonomous Agents (AGENTS '97), February 1997 (inproceedings)

Abstract
We present a purely vision-based scheme for learning a parsimonious representation of an open environment. Using simple exploration behaviours, our system constructs a graph of appropriately chosen views. To navigate between views connected in the graph, we employ a homing strategy inspired by findings of insect ethology. Simulations and robot experiments demonstrate the feasibility of the proposed approach.

PDF Web DOI [BibTex]


1995


View-based cognitive map learning by an autonomous robot

Mallot, H., Bülthoff, H., Georg, P., Schölkopf, B., Yasuhara, K.

In Proceedings International Conference on Artificial Neural Networks, vol. 2, pages: 381-386, (Editors: Fogelman-Soulié, F.), EC2, Paris, France, Conférence Internationale sur les Réseaux de Neurones Artificiels (ICANN '95), October 1995 (inproceedings)

Abstract
This paper presents a view-based approach to map learning and navigation in mazes. By means of graph theory we have shown that the view-graph is a sufficient representation for map behaviour such as path planning. A neural network for unsupervised learning of the view-graph from sequences of views is constructed. We use a modified Kohonen (1988) learning rule that transforms temporal sequence (rather than featural similarity) into connectedness. In the main part of the paper, we present a robot implementation of the scheme. The results show that the proposed network is able to support map behaviour in simple environments.

PDF [BibTex]

Extracting support data for a given task

Schölkopf, B., Burges, C., Vapnik, V.

In First International Conference on Knowledge Discovery & Data Mining (KDD-95), pages: 252-257, (Editors: UM Fayyad and R Uthurusamy), AAAI Press, Menlo Park, CA, USA, August 1995 (inproceedings)

Abstract
We report a novel possibility for extracting a small subset of a data base which contains all the information necessary to solve a given classification task: using the Support Vector Algorithm to train three different types of handwritten digit classifiers, we observed that these types of classifiers construct their decision surface from strongly overlapping small (≈ 4%) subsets of the data base. This finding opens up the possibility of compressing data bases significantly by disposing of the data which is not important for the solution of a given task. In addition, we show that the theory allows us to predict the classifier that will have the best generalization ability, based solely on performance on the training set and characteristics of the learning machines. This finding is important for cases where the amount of available data is limited.

PDF [BibTex]
