

2000


DES Approach Failure Diagnosis of Pump-valve System

Son, HI., Kim, KW., Lee, S.

In Korean Society of Precision Engineering (KSPE) Conference, pages: 643-646, Annual Meeting of the Korean Society of Precision Engineering (KSPE), October 2000 (inproceedings)

Abstract
As many industrial systems become more complex, it becomes extremely difficult to diagnose the cause of failures. This paper presents a failure diagnosis approach based on discrete event system theory. In particular, the approach is a hybrid of event-based and state-based methods, leading to a simpler failure diagnoser with supervisory control capability. The design procedure is presented along with a pump-valve system as an example.
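The diagnoser construction itself is not given in this abstract; the following is only a minimal Python sketch of the underlying idea, using a hypothetical pump-valve transition model (states, events, and the fault are invented for illustration, not the authors' model): the diagnoser tracks the set of states the plant could be in, together with a fault flag, from observable events alone.

# Minimal sketch of a discrete-event diagnoser: track the set of states the
# plant could be in (with a "failure occurred" flag) given only observable events.
# The transition model below is a hypothetical pump-valve example, not the paper's.

TRANSITIONS = {
    # (state, event) -> next state; events starting with "f_" are unobservable faults
    ("idle", "start_pump"): "pumping",
    ("pumping", "open_valve"): "flowing",
    ("pumping", "f_valve_stuck"): "stuck",
    ("stuck", "open_valve"): "stuck",        # valve command has no effect after the fault
    ("flowing", "stop_pump"): "idle",
    ("stuck", "stop_pump"): "idle_faulty",
}
OBSERVABLE = {"start_pump", "open_valve", "stop_pump"}

def unobservable_reach(estimates):
    """Close a set of (state, fault_flag) pairs under unobservable fault events."""
    estimates = set(estimates)
    changed = True
    while changed:
        changed = False
        for (state, faulty) in list(estimates):
            for (s, e), nxt in TRANSITIONS.items():
                if s == state and e not in OBSERVABLE and (nxt, True) not in estimates:
                    estimates.add((nxt, True))   # all unobservable events here are faults
                    changed = True
    return estimates

def diagnose(observed_events, initial_state="idle"):
    estimates = unobservable_reach({(initial_state, False)})
    for event in observed_events:
        nxt = {(TRANSITIONS[(s, event)], f) for (s, f) in estimates if (s, event) in TRANSITIONS}
        estimates = unobservable_reach(nxt)
    if all(f for _, f in estimates):
        return "failure certain"
    if any(f for _, f in estimates):
        return "failure possible"
    return "normal"

print(diagnose(["start_pump", "open_valve", "stop_pump"]))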

PDF [BibTex]

Engineering Support Vector Machine Kernels That Recognize Translation Initiation Sites

Zien, A., Rätsch, G., Mika, S., Schölkopf, B., Lengauer, T., Müller, K.

Bioinformatics, 16(9):799-807, September 2000 (article)

Abstract
Motivation: In order to extract protein sequences from nucleotide sequences, it is an important step to recognize points at which regions start that code for proteins. These points are called translation initiation sites (TIS). Results: The task of finding TIS can be modeled as a classification problem. We demonstrate the applicability of support vector machines for this task, and show how to incorporate prior biological knowledge by engineering an appropriate kernel function. With the described techniques the recognition performance can be improved by 26% over leading existing approaches. We provide evidence that existing related methods (e.g. ESTScan) could profit from advanced TIS recognition.
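The engineered kernels of the paper are not reproduced in this excerpt; as a minimal, hypothetical sketch of the general mechanism, the snippet below one-hot encodes short sequence windows around a candidate start codon and plugs a hand-written, position-wise polynomial kernel into scikit-learn's SVC via a callable. The toy windows, labels, and kernel form are assumptions, not the kernel proposed in the paper.

import numpy as np
from sklearn.svm import SVC

# Toy sequence windows centred on a candidate ATG; labels mark true initiation sites.
# These sequences and the simple polynomial kernel are illustrative only.
windows = ["GCCATGGCG", "TTTATGAAA", "CCGATGGTA", "AGGATGTTT"]
labels  = [1, 0, 1, 0]

ALPHABET = "ACGT"
def one_hot(seq):
    v = np.zeros(len(seq) * len(ALPHABET))
    for i, ch in enumerate(seq):
        v[i * len(ALPHABET) + ALPHABET.index(ch)] = 1.0
    return v

X = np.array([one_hot(w) for w in windows])

def poly_position_kernel(A, B):
    # (<a,b> + 1)^3 on one-hot encodings: matching nucleotides at the same position
    # contribute, and the cube lets up-to-three-way position correlations interact.
    return (A @ B.T + 1.0) ** 3

clf = SVC(kernel=poly_position_kernel).fit(X, labels)
print(clf.predict(X))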

Web DOI [BibTex]

Three-dimensional reconstruction of planar scenes

Urbanek, M.

Biologische Kybernetik, INP Grenoble, Warsaw University of Technology, September 2000 (diplomathesis)

Abstract
For a planar scene, we propose an algorithm to estimate its 3D structure. Homographies between corresponding planes are employed to recover the camera motion between the positions from which the images of the scene were taken. The cases of one and of multiple corresponding planes present in the scene are distinguished, and solutions are proposed for both.
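The thesis's algorithm is not spelled out in this abstract; as a rough sketch of the standard pipeline it alludes to, the snippet below estimates the homography induced by a plane between two views with OpenCV and decomposes it into candidate camera rotations and translations. The point correspondences and the intrinsic matrix K are invented for illustration.

import numpy as np
import cv2

# Hypothetical pixel correspondences of coplanar points in two views,
# and an assumed camera intrinsic matrix K (both made up for illustration).
pts1 = np.array([[100, 100], [400, 110], [420, 300], [90, 310]], dtype=np.float64)
pts2 = np.array([[120, 130], [410, 120], [450, 330], [110, 340]], dtype=np.float64)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Homography induced by the plane between the two views.
H, _ = cv2.findHomography(pts1, pts2, 0)

# Decompose into candidate (R, t, n) triples; the physically valid solution must
# still be selected, e.g. by requiring points to lie in front of both cameras.
n_solutions, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
for R, t, n in zip(rotations, translations, normals):
    print("R =\n", R, "\nt =", t.ravel(), "\nplane normal =", n.ravel())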

ZIP [BibTex]

Analysis of Gene Expression Data with Pathway Scores

Zien, A., Küffner, R., Zimmer, R., Lengauer, T.

In ISMB 2000, pages: 407-417, AAAI Press, Menlo Park, CA, USA, 8th International Conference on Intelligent Systems for Molecular Biology, August 2000 (inproceedings)

Abstract
We present a new approach for the evaluation of gene expression data. The basic idea is to generate biologically possible pathways and to score them with respect to gene expression measurements. We suggest sample scoring functions for different problem specifications. The significance of the scores for the investigated pathways is assessed by comparison to a number of scores for random pathways. We show that simple scoring functions can assign statistically significant scores to biologically relevant pathways. This suggests that the combination of appropriate scoring functions with the systematic generation of pathways can be used in order to select the most interesting pathways based on gene expression measurements.
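The paper's specific scoring functions are not given in this abstract; the sketch below only illustrates the general recipe of scoring a gene set against expression measurements and assessing significance against random gene sets of the same size. The mean-absolute-log-ratio score and the synthetic data are assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy expression log-ratios for 1000 genes and one hypothetical pathway.
expression = rng.normal(0.0, 1.0, size=1000)
pathway = [3, 17, 42, 99, 123, 256]          # indices of pathway genes (made up)
expression[pathway] += 1.5                    # pretend the pathway is up-regulated

def score(gene_set):
    # Example scoring function: mean absolute log-ratio of the member genes.
    return np.mean(np.abs(expression[list(gene_set)]))

observed = score(pathway)

# Compare against scores of random "pathways" of the same size.
null = np.array([score(rng.choice(len(expression), size=len(pathway), replace=False))
                 for _ in range(10000)])
p_value = (np.sum(null >= observed) + 1) / (len(null) + 1)
print(f"pathway score={observed:.2f}, permutation p-value={p_value:.4f}")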

PDF [BibTex]

A Meanfield Approach to the Thermodynamics of a Protein-Solvent System with Application to the Oligomerization of the Tumour Suppressor p53.

Noolandi, J., Davison, TS., Völkel, A., Nie, F., Kay, C., Arrowsmith, C.

Proceedings of the National Academy of Sciences of the United States of America, 97(18):9955-9960, August 2000 (article)

Web [BibTex]

Observational Learning with Modular Networks

Shin, H., Lee, H., Cho, S.

In Lecture Notes in Computer Science (LNCS 1983), LNCS 1983, pages: 126-132, Springer-Verlag, Heidelberg, International Conference on Intelligent Data Engineering and Automated Learning (IDEAL), July 2000 (inproceedings)

Abstract
The observational learning algorithm (OLA) is an ensemble algorithm in which each network is initially trained on a bootstrapped data set and virtual data are then generated from the ensemble for further training. Here we propose a modular OLA approach in which the original training set is partitioned into clusters and each network is instead trained on one of the clusters. The networks are then combined with weighting factors that are inversely proportional to the distance from the input vector to the cluster centers. Comparison with bagging and boosting shows that the proposed approach reduces generalization error while employing a smaller number of networks.
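As a loose illustration of the combination rule described above (not the authors' networks or training protocol), the following sketch partitions a toy regression set with k-means, trains one small scikit-learn MLP per cluster, and weights the members inversely to the input's distance from each cluster center.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)

# Partition the training set into clusters and train one network per cluster.
k = 3
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
nets = []
for c in range(k):
    idx = km.labels_ == c
    net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
    nets.append(net.fit(X[idx], y[idx]))

def predict(X_new):
    # Weight each network inversely to the distance from the input to its cluster center.
    d = np.linalg.norm(X_new[:, None, :] - km.cluster_centers_[None, :, :], axis=2)
    w = 1.0 / (d + 1e-8)
    w /= w.sum(axis=1, keepdims=True)
    preds = np.column_stack([net.predict(X_new) for net in nets])
    return np.sum(w * preds, axis=1)

X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print(predict(X_test))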

PDF [BibTex]

The Infinite Gaussian Mixture Model

Rasmussen, CE.

In Advances in Neural Information Processing Systems 12, pages: 554-560, (Editors: Solla, S.A. , T.K. Leen, K-R Müller), MIT Press, Cambridge, MA, USA, Thirteenth Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
In a Bayesian mixture model it is not necessary a priori to limit the number of components to be finite. In this paper an infinite Gaussian mixture model is presented which neatly sidesteps the difficult problem of finding the “right” number of mixture components. Inference in the model is done using an efficient parameter-free Markov chain that relies entirely on Gibbs sampling.
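The paper's Gibbs sampler is not reproduced here; as a loose practical analogue, scikit-learn's BayesianGaussianMixture with a Dirichlet-process prior (a variational approximation rather than MCMC) likewise lets the data determine how many of the allotted components are actually used. The data and settings below are illustrative.

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic data from three Gaussians; the model is merely given an upper bound of 10.
X = np.vstack([rng.normal(loc, 0.3, size=(200, 2)) for loc in (-2.0, 0.0, 2.5)])

bgm = BayesianGaussianMixture(
    n_components=10,                                  # upper bound, not a fixed choice
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500,
    random_state=0,
).fit(X)

# Components with non-negligible weight are the ones the data actually support.
print(np.round(bgm.weights_, 3))
print("effective components:", np.sum(bgm.weights_ > 0.01))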

PDF Web [BibTex]

Generalization Abilities of Ensemble Learning Algorithms

Shin, H., Jang, M., Cho, S.

In Proc. of the Korean Brain Society Conference, pages: 129-133, Korean Brain Society Conference, June 2000 (inproceedings)

[BibTex]

Support vector method for novelty detection

Schölkopf, B., Williamson, R., Smola, A., Shawe-Taylor, J., Platt, J.

In Advances in Neural Information Processing Systems 12, pages: 582-588, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
Suppose you are given some dataset drawn from an underlying probability distribution P and you want to estimate a “simple” subset S of input space such that the probability that a test point drawn from P lies outside of S equals some a priori specified ν between 0 and 1. We propose a method to approach this problem by trying to estimate a function f which is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. We provide a theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabelled data.
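The estimator described above is what is now commonly implemented as the one-class SVM; the snippet below runs scikit-learn's implementation on synthetic data, with nu playing the role of the prespecified fraction of points allowed outside the estimated region. Data and parameter values are illustrative.

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(500, 2))             # "normal" data
X_test = np.vstack([rng.normal(0.0, 1.0, size=(10, 2)),   # inliers
                    rng.normal(6.0, 1.0, size=(5, 2))])   # outliers

# nu upper-bounds the fraction of training points outside the region
# (and lower-bounds the fraction of support vectors).
clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05).fit(X_train)
print(clf.predict(X_test))   # +1 = inside the estimated region, -1 = outside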

PDF Web [BibTex]

Solving Satisfiability Problems with Genetic Algorithms

Harmeling, S.

In Genetic Algorithms and Genetic Programming at Stanford 2000, pages: 206-213, (Editors: Koza, J. R.), Stanford Bookstore, Stanford, CA, USA, June 2000 (inbook)

Abstract
We show how to solve hard 3-SAT problems using genetic algorithms. Furthermore, we explore other genetic operators that may be useful to tackle 3-SAT problems, and discuss their pros and cons.
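The particular operators examined in the chapter are not listed in this abstract; the following is a minimal, self-contained genetic-algorithm sketch for 3-SAT (fitness = number of satisfied clauses, tournament selection, uniform crossover, bit-flip mutation) on a tiny hand-made formula. All choices are illustrative assumptions.

import random

random.seed(0)

# Tiny 3-SAT instance over 6 variables; each clause lists signed literals
# (positive = variable, negative = negated variable). Made up for illustration.
CLAUSES = [(1, -2, 3), (-1, 4, 5), (2, -4, 6), (-3, -5, 6), (1, 2, -6), (-2, 3, 4)]
N_VARS = 6

def fitness(assignment):
    # Number of satisfied clauses; assignment[i] is the truth value of variable i+1.
    return sum(any(assignment[abs(l) - 1] == (l > 0) for l in clause)
               for clause in CLAUSES)

def tournament(pop):
    return max(random.sample(pop, 3), key=fitness)

pop = [[random.random() < 0.5 for _ in range(N_VARS)] for _ in range(40)]
for generation in range(200):
    best = max(pop, key=fitness)
    if fitness(best) == len(CLAUSES):
        print("satisfying assignment:", best)
        break
    nxt = [best]                                                                  # elitism
    while len(nxt) < len(pop):
        a, b = tournament(pop), tournament(pop)
        child = [a[i] if random.random() < 0.5 else b[i] for i in range(N_VARS)]  # uniform crossover
        child = [not bit if random.random() < 0.05 else bit for bit in child]     # bit-flip mutation
        nxt.append(child)
    pop = nxt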

PDF [BibTex]

ν-Arc: Ensemble Learning in the Presence of Outliers

Rätsch, G., Schölkopf, B., Smola, A., Müller, K., Onoda, T., Mika, S.

In Advances in Neural Information Processing Systems 12, pages: 561-567, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
AdaBoost and other ensemble methods have successfully been applied to a number of classification tasks, seemingly defying problems of overfitting. AdaBoost performs gradient descent in an error function with respect to the margin, asymptotically concentrating on the patterns which are hardest to learn. For very noisy problems, however, this can be disadvantageous. Indeed, theoretical analysis has shown that the margin distribution, as opposed to just the minimal margin, plays a crucial role in understanding this phenomenon. Loosely speaking, some outliers should be tolerated if this has the benefit of substantially increasing the margin on the remaining points. We propose a new boosting algorithm which allows for the possibility of a pre-specified fraction of points to lie in the margin area or even on the wrong side of the decision boundary.

PDF Web [BibTex]

Invariant feature extraction and classification in kernel spaces

Mika, S., Rätsch, G., Weston, J., Schölkopf, B., Smola, A., Müller, K.

In Advances in neural information processing systems 12, pages: 526-532, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

PDF Web [BibTex]

Transductive Inference for Estimating Values of Functions

Chapelle, O., Vapnik, V., Weston, J.

In Advances in Neural Information Processing Systems 12, pages: 421-427, (Editors: Solla, S.A. , T.K. Leen, K-R Müller), MIT Press, Cambridge, MA, USA, Thirteenth Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
We introduce an algorithm for estimating the values of a function at a set of test points $x_1^*,\dots,x_m^*$ given a set of training points $(x_1,y_1),\dots,(x_\ell,y_\ell)$ without estimating (as an intermediate step) the regression function. We demonstrate that this direct (transductive) way for estimating values of the regression (or classification in pattern recognition) is more accurate than the traditional one based on two steps, first estimating the function and then calculating the values of this function at the points of interest.

PDF Web [BibTex]

The entropy regularization information criterion

Smola, A., Shawe-Taylor, J., Schölkopf, B., Williamson, R.

In Advances in Neural Information Processing Systems 12, pages: 342-348, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
Effective methods of capacity control via uniform convergence bounds for function expansions have been largely limited to Support Vector machines, where good bounds are obtainable by the entropy number approach. We extend these methods to systems with expansions in terms of arbitrary (parametrized) basis functions and a wide range of regularization methods covering the whole range of general linear additive models. This is achieved by a data dependent analysis of the eigenvalues of the corresponding design matrix.

PDF Web [BibTex]

Model Selection for Support Vector Machines

Chapelle, O., Vapnik, V.

In Advances in Neural Information Processing Systems 12, pages: 230-236, (Editors: Solla, S.A. , T.K. Leen, K-R Müller), MIT Press, Cambridge, MA, USA, Thirteenth Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
New functionals for parameter (model) selection of Support Vector Machines are introduced based on the concepts of the span of support vectors and rescaling of the feature space. It is shown that using these functionals, one can both predict the best choice of parameters of the model and the relative quality of performance for any value of parameter.
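The span and rescaling functionals themselves are not implemented here; for contrast, the sketch below performs the brute-force alternative that such functionals aim to make cheaper — cross-validated grid search over the SVM parameters — on synthetic data with scikit-learn.

from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Grid of candidate parameter values (illustrative ranges).
param_grid = {"C": [0.1, 1.0, 10.0, 100.0],
              "gamma": [0.01, 0.1, 1.0]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5).fit(X, y)
print("best parameters:", search.best_params_)
print("cross-validated accuracy: %.3f" % search.best_score_)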

PDF Web [BibTex]

New Support Vector Algorithms

Schölkopf, B., Smola, A., Williamson, R., Bartlett, P.

Neural Computation, 12(5):1207-1245, May 2000 (article)

Abstract
We propose a new class of support vector algorithms for regression and classification. In these algorithms, a parameter ν lets one effectively control the number of support vectors. While this can be useful in its own right, the parameterization has the additional benefit of enabling us to eliminate one of the other free parameters of the algorithm: the accuracy parameter ε in the regression case, and the regularization constant C in the classification case. We describe the algorithms, give some theoretical results concerning the meaning and the choice of ν, and report experimental results.
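The ν-parameterized algorithms introduced here are available in scikit-learn as NuSVC and NuSVR; the snippet below merely illustrates, on synthetic data, how ν bounds the fraction of support vectors.

from sklearn.datasets import make_classification
from sklearn.svm import NuSVC

X, y = make_classification(n_samples=400, n_features=5, random_state=0)

for nu in (0.1, 0.3, 0.6):
    clf = NuSVC(nu=nu, kernel="rbf", gamma="scale").fit(X, y)
    frac_sv = clf.support_.size / len(X)
    # nu is a lower bound on the fraction of support vectors
    # and an upper bound on the fraction of margin errors.
    print(f"nu={nu:.1f}  fraction of support vectors={frac_sv:.2f}")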

Web DOI [BibTex]

Generalization Abilities of Ensemble Learning Algorithms: OLA, Bagging, Boosting

Shin, H., Jang, M., Cho, S., Lee, B., Lim, Y.

In Proc. of the Korea Information Science Conference, pages: 226-228, Conference on Korean Information Science, April 2000 (inproceedings)

[BibTex]

A simple iterative approach to parameter optimization

Zien, A., Zimmer, R., Lengauer, T.

In RECOMB 2000, pages: 318-327, ACM Press, New York, NY, USA, Fourth Annual Conference on Research in Computational Molecular Biology, April 2000 (inproceedings)

Abstract
Various bioinformatics problems require optimizing several different properties simultaneously. For example, in the protein threading problem, a linear scoring function combines the values for different properties of possible sequence-to-structure alignments into a single score to allow for unambiguous optimization. In this context, an essential question is how each property should be weighted. As the native structures are known for some sequences, the implied partial ordering on optimal alignments may be used to adjust the weights. To resolve the arising interdependence of weights and computed solutions, we propose a novel approach: iterating the computation of solutions (here: threading alignments) given the weights and the estimation of optimal weights of the scoring function given these solutions via a systematic calibration method. We show that this procedure converges to structurally meaningful weights that also lead to significantly improved performance on comprehensive test data sets, as measured in different ways. The latter indicates that the performance of threading can be improved in general.

Web DOI [BibTex]

Contrast discrimination using periodic pulse trains

Wichmann, F., Henning, G.

pages: 74, 3. Tübinger Wahrnehmungskonferenz (TWK), February 2000 (poster)

Abstract
Understanding contrast transduction is essential for understanding spatial vision. Previous research (Wichmann et al. 1998; Wichmann, 1999; Henning and Wichmann, 1999) has demonstrated the importance of high contrasts for distinguishing between alternative models of contrast discrimination. However, the modulation transfer function of the eye imposes large contrast losses on stimuli, particularly stimuli of high spatial frequency, making high retinal contrasts difficult to obtain using sinusoidal gratings. Standard 2AFC contrast discrimination experiments were conducted using periodic pulse trains as stimuli. Given our Mitsubishi display, we achieve stimuli with up to 160% contrast at the fundamental frequency. The shape of the threshold versus (pedestal) contrast (TvC) curve using pulse trains shows the characteristic dipper shape, i.e. contrast discrimination is sometimes “easier” than detection. The rising part of the TvC function has the same slope as that measured for contrast discrimination using sinusoidal gratings of the same frequency as the fundamental. First, periodic pulse trains thus offer the possibility to explore the visual system’s properties using high retinal contrasts and might prove useful in tasks other than contrast discrimination. Second, at least for high spatial frequencies (8 c/deg), contrast discrimination using sinusoids and periodic pulse trains results in virtually identical TvC functions, indicating a lack of probability summation. Further implications of these results are discussed.

Web [BibTex]

Subliminale Darbietung verkehrsrelevanter Information in Kraftfahrzeugen

Staedtgen, M., Hahn, S., Franz, MO., Spitzer, M.

pages: 98, (Editors: H.H. Bülthoff, K.R. Gegenfurtner, H.A. Mallot), 3. Tübinger Wahrnehmungskonferenz (TWK), February 2000 (poster)

Abstract
Modern image-processing technology makes it possible to detect certain critical traffic situations in motor vehicles automatically and to warn or inform the driver. One problem is how to present the results in a way that burdens the driver as little as possible and does not divert attention from the traffic through additional warning lights or acoustic signals. In a series of experiments we therefore investigated whether subliminally presented, i.e. not consciously perceived, traffic-relevant information affects behaviour and can be used to convey information to the driver. In a semantic priming experiment using a lexical decision task, we showed that words related to road traffic are processed faster when a related image of a traffic sign has been presented subliminally beforehand. A speed-up was also obtained with parafoveal presentation of the subliminal stimuli. In a visual search task, traffic signs in images of real traffic scenes were detected faster when the image of the traffic sign had been presented subliminally beforehand. In both experiments the presentation time for the cue stimuli was 17 ms, and forward and backward masking additionally prevented conscious perception. These laboratory studies showed that subliminally presented stimuli can also speed up information processing in the context of road traffic. In a third experiment, the effect of a subliminal cue on braking reaction time was examined in a real driving test. The participants (n=17) were instructed to brake as quickly as possible when the brake lights of a vehicle driving 12-15 m ahead lit up. In 50 of 100 trials, a subliminal stimulus (two red dots, one centimetre in diameter and ten centimetres apart) was presented 150 ms before the brake lights came on. The stimuli were shown on a TFT-LCD display integrated into the car in place of the speedometer. Compared to responses without the subliminal stimulus, reaction time was significantly shortened by 51 ms. The experiments described here show that subliminal presentation of traffic-relevant information can also affect behaviour in motor vehicles. In the future, combining on-line image processing in the vehicle with subliminal presentation of the results could increase traffic safety and comfort.

Web [BibTex]

Statistical Learning and Kernel Methods

Schölkopf, B.

In Data Fusion and Perception, CISM Courses and Lectures, International Centre for Mechanical Sciences, Vol. 431, pages: 3-24, (Editors: G Della Riccia and H-J Lenz and R Kruse), Springer, Vienna, 2000 (inbook)

[BibTex]

Bounds on Error Expectation for Support Vector Machines

Vapnik, V., Chapelle, O.

Neural Computation, 12(9):2013-2036, 2000 (article)

Abstract
We introduce the concept of span of support vectors (SV) and show that the generalization ability of support vector machines (SVM) depends on this new geometrical concept. We prove that the value of the span is always smaller (and can be much smaller) than the diameter of the smallest sphere containing the support vectors, used in previous bounds. We also demonstrate experimentally that the prediction of the test error given by the span is very accurate and has direct application in model selection (choice of the optimal parameters of the SVM).

GZIP [BibTex]

Intelligence as a Complex System

Zhou, D.

Biologische Kybernetik, 2000 (phdthesis)

[BibTex]

Neural Networks in Robot Control

Peters, J.

Biologische Kybernetik, Fernuniversität Hagen, Hagen, Germany, 2000 (diplomathesis)

[BibTex]

Bayesian modelling of fMRI time series

Højen-Sørensen, PADFR., Rasmussen, CE., Hansen, LK.

In Advances in Neural Information Processing Systems 12, pages: 754-760, (Editors: Sara A. Solla, Todd K. Leen and Klaus-Robert Müller), MIT Press, Cambridge, MA, USA, Thirteenth Annual Neural Information Processing Systems Conference (NIPS), 2000 (inproceedings)

Abstract
We present a Hidden Markov Model (HMM) for inferring the hidden psychological state (or neural activity) during single trial fMRI activation experiments with blocked task paradigms. Inference is based on Bayesian methodology, using a combination of analytical and a variety of Markov Chain Monte Carlo (MCMC) sampling techniques. The advantage of this method is that detection of short time learning effects between repeated trials is possible since inference is based only on single trial experiments.
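The Bayesian MCMC inference of the paper is not reproduced here; as a rough analogue only, the sketch below fits a two-state Gaussian HMM to a synthetic blocked time series with hmmlearn (maximum-likelihood EM rather than MCMC) and decodes the hidden state sequence. The synthetic signal and all settings are assumptions.

import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Synthetic "fMRI-like" signal: blocked task paradigm, rest vs. activation,
# 20 scans per block, with activated blocks shifted upwards and noise added.
states = np.repeat([0, 1, 0, 1, 0, 1], 20)
signal = (1.5 * states + rng.normal(0.0, 1.0, size=states.size)).reshape(-1, 1)

model = GaussianHMM(n_components=2, covariance_type="full", n_iter=200,
                    random_state=0).fit(signal)
decoded = model.predict(signal)
print("inferred state sequence:", decoded)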

PDF PostScript [BibTex]

An Introduction to Kernel-Based Learning Algorithms

Müller, K., Mika, S., Rätsch, G., Tsuda, K., Schölkopf, B.

In Handbook of Neural Network Signal Processing, 4, (Editors: Yu Hen Hu and Jenq-Neng Hwang), CRC Press, 2000 (inbook)

[BibTex]

Choosing nu in support vector regression with different noise models — theory and experiments

Chalimourda, A., Schölkopf, B., Smola, A.

In Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks, IJCNN 2000, Neural Computing: New Challenges and Perspectives for the New Millennium, IEEE, International Joint Conference on Neural Networks, 2000 (inproceedings)

[BibTex]

A High Resolution and Accurate Pentium Based Timer

Ong, CS., Wong, F., Lai, WK.

In 2000 (inproceedings)

PDF [BibTex]

Robust Ensemble Learning for Data Mining

Rätsch, G., Schölkopf, B., Smola, A., Mika, S., Onoda, T., Müller, K.

In Fourth Pacific-Asia Conference on Knowledge Discovery and Data Mining, 1805, pages: 341-341, Lecture Notes in Artificial Intelligence, (Editors: H. Terano), Fourth Pacific-Asia Conference on Knowledge Discovery and Data Mining, 2000 (inproceedings)

[BibTex]

Sparse greedy matrix approximation for machine learning.

Smola, A., Schölkopf, B.

In 17th International Conference on Machine Learning, Stanford, 2000, pages: 911-918, (Editors: P Langley), Morgan Kaufmann, San Francisco, CA, USA, 17th International Conference on Machine Learning (ICML), 2000 (inproceedings)

[BibTex]

The Kernel Trick for Distances

Schölkopf, B.

(MSR-TR-2000-51), Microsoft Research, Redmond, WA, USA, 2000 (techreport)

Abstract
A method is described which, like the kernel trick in support vector machines (SVMs), lets us generalize distance-based algorithms to operate in feature spaces, usually nonlinearly related to the input space. This is done by identifying a class of kernels which can be represented as norm-based distances in Hilbert spaces. It turns out that common kernel algorithms, such as SVMs and kernel PCA, are actually really distance-based algorithms and can be run with that class of kernels, too. As well as providing a useful new insight into how these algorithms work, the present work can form the basis for conceiving new algorithms.
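The central identity behind this report is that a kernel induces squared distances in feature space via d²(x, y) = k(x, x) − 2 k(x, y) + k(y, y). The sketch below computes such a kernel-induced distance matrix with NumPy and hands it to an off-the-shelf distance-based algorithm (average-linkage clustering); the Gaussian kernel, the data, and the choice of clustering are illustrative.

import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.4, size=(30, 2)), rng.normal(2, 0.4, size=(30, 2))])

def rbf_kernel(A, B, gamma=0.5):
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

# Kernel-induced squared distances: d^2(x, y) = k(x, x) - 2 k(x, y) + k(y, y).
K = rbf_kernel(X, X)
D2 = np.diag(K)[:, None] - 2 * K + np.diag(K)[None, :]
D2 = np.clip(D2, 0.0, None)                       # guard against tiny negative values

# Any distance-based algorithm can now run in the feature space,
# e.g. average-linkage hierarchical clustering.
Z = linkage(squareform(np.sqrt(D2), checks=False), method="average")
print(fcluster(Z, t=2, criterion="maxclust"))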

PDF Web [BibTex]

Entropy Numbers of Linear Function Classes.

Williamson, R., Smola, A., Schölkopf, B.

In 13th Annual Conference on Computational Learning Theory, pages: 309-319, (Editors: N Cesa-Bianchi and S Goldman), Morgan Kaufmann, San Francisco, CA, USA, 13th Annual Conference on Computational Learning Theory (COLT), 2000 (inproceedings)

[BibTex]

Kernel method for percentile feature extraction

Schölkopf, B., Platt, J., Smola, A.

(MSR-TR-2000-22), Microsoft Research, 2000 (techreport)

Abstract
A method is proposed which computes a direction in a dataset such that a specified fraction of a particular class of all examples is separated from the overall mean by a maximal margin. The projector onto that direction can be used for class-specific feature extraction. The algorithm is carried out in a feature space associated with a support vector kernel function; hence it can be used to construct a large class of nonlinear feature extractors. In the particular case where there exists only one class, the method can be thought of as a robust form of principal component analysis, where instead of variance we maximize percentile thresholds. Finally, we generalize it to also include the possibility of specifying negative examples.

PDF [BibTex]


1995


View-based cognitive map learning by an autonomous robot

Mallot, H., Bülthoff, H., Georg, P., Schölkopf, B., Yasuhara, K.

In Proceedings International Conference on Artificial Neural Networks, vol. 2, pages: 381-386, (Editors: Fogelman-Soulié, F.), EC2, Paris, France, Conférence Internationale sur les Réseaux de Neurones Artificiels (ICANN '95), October 1995 (inproceedings)

Abstract
This paper presents a view-based approach to map learning and navigation in mazes. By means of graph theory we have shown that the view-graph is a sufficient representation for map behaviour such as path planning. A neural network for unsupervised learning of the view-graph from sequences of views is constructed. We use a modified Kohonen (1988) learning rule that transforms temporal sequence (rather than featural similarity) into connectedness. In the main part of the paper, we present a robot implementation of the scheme. The results show that the proposed network is able to support map behaviour in simple environments.

PDF [BibTex]

Extracting support data for a given task

Schölkopf, B., Burges, C., Vapnik, V.

In First International Conference on Knowledge Discovery & Data Mining (KDD-95), pages: 252-257, (Editors: UM Fayyad and R Uthurusamy), AAAI Press, Menlo Park, CA, USA, August 1995 (inproceedings)

Abstract
We report a novel possibility for extracting a small subset of a data base which contains all the information necessary to solve a given classification task: using the Support Vector Algorithm to train three different types of handwritten digit classifiers, we observed that these types of classifiers construct their decision surface from strongly overlapping small (≈ 4%) subsets of the data base. This finding opens up the possibility of compressing data bases significantly by disposing of the data which is not important for the solution of a given task. In addition, we show that the theory allows us to predict the classifier that will have the best generalization ability, based solely on performance on the training set and characteristics of the learning machines. This finding is important for cases where the amount of available data is limited.

PDF [BibTex]

View-Based Cognitive Mapping and Path Planning

Schölkopf, B., Mallot, H.

Adaptive Behavior, 3(3):311-348, January 1995 (article)

Abstract
This article presents a scheme for learning a cognitive map of a maze from a sequence of views and movement decisions. The scheme is based on an intermediate representation called the view graph, whose nodes correspond to the views whereas the labeled edges represent the movements leading from one view to another. By means of a graph theoretical reconstruction method, the view graph is shown to carry complete information on the topological and directional structure of the maze. Path planning can be carried out directly in the view graph without actually performing this reconstruction. A neural network is presented that learns the view graph during a random exploration of the maze. It is based on an unsupervised competitive learning rule translating temporal sequence (rather than similarity) of views into connectedness in the network. The network uses its knowledge of the topological and directional structure of the maze to generate expectations about which views are likely to be encountered next, improving the view-recognition performance. Numerical simulations illustrate the network's ability for path planning and the recognition of views degraded by random noise. The results are compared to findings of behavioral neuroscience.
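None of the paper's learning rule is implemented here; the sketch below only illustrates the view-graph representation itself with networkx — views as nodes, movements as labeled edges, and path planning as a shortest-path query — on a made-up miniature maze.

import networkx as nx

# Toy view graph: nodes are views, edge labels are the movements that lead
# from one view to the next (a made-up miniature maze, not from the paper).
G = nx.DiGraph()
G.add_edge("view_entrance", "view_corridor", move="forward")
G.add_edge("view_corridor", "view_junction", move="forward")
G.add_edge("view_junction", "view_left_arm", move="turn_left")
G.add_edge("view_junction", "view_goal", move="turn_right")
G.add_edge("view_left_arm", "view_junction", move="turn_around")

# Path planning directly in the view graph: find the view sequence and read off the movements.
views = nx.shortest_path(G, "view_entrance", "view_goal")
moves = [G.edges[a, b]["move"] for a, b in zip(views, views[1:])]
print("views:", views)
print("movements:", moves)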

Web DOI [BibTex]

Suppression and creation of chaos in a periodically forced Lorenz system.

Franz, MO., Zhang, MH.

Physical Review E, 52, pages: 3558-3565, 1995 (article)

Abstract
Periodic forcing is introduced into the Lorenz model to study the effects of time-dependent forcing on the behavior of the system. Such a nonautonomous system stays dissipative and has a bounded attracting set which all trajectories finally enter. The possible kinds of attracting sets are restricted to periodic orbits and strange attractors. A large-scale survey of parameter space shows that periodic forcing has mainly three effects in the Lorenz system depending on the forcing frequency: (i) Fixed points are replaced by oscillations around them; (ii) resonant periodic orbits are created both in the stable and the chaotic region; (iii) chaos is created in the stable region near the resonance frequency and in periodic windows. A comparison to other studies shows that part of this behavior has been observed in simulations of higher truncations and real world experiments. Since very small modulations can already have a considerable effect, this suggests that periodic processes such as annual or diurnal cycles should not be omitted even in simple climate models.
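The paper's parameter survey is not reproduced here; the sketch below merely integrates a Lorenz system with a sinusoidal forcing term added to one of the equations using SciPy, so trajectories can be inspected for different forcing amplitudes and frequencies. Where the forcing enters and the parameter values are assumptions, not necessarily the paper's exact setup.

import numpy as np
from scipy.integrate import solve_ivp

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
A, OMEGA = 2.0, 8.0          # forcing amplitude and frequency (illustrative values)

def forced_lorenz(t, state):
    x, y, z = state
    # Periodic forcing added to the second equation (one possible choice).
    dx = SIGMA * (y - x)
    dy = x * (RHO - z) - y + A * np.sin(OMEGA * t)
    dz = x * y - BETA * z
    return [dx, dy, dz]

sol = solve_ivp(forced_lorenz, (0.0, 50.0), [1.0, 1.0, 1.0], max_step=0.01)
print("final state:", sol.y[:, -1])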

[BibTex]

A New Method for Constructing Artificial Neural Networks

Vapnik, V., Burges, C., Schölkopf, B.

AT & T Bell Laboratories, 1995 (techreport)

[BibTex]

Image segmentation from motion: just the loss of high-spatial-frequency content ?

Wichmann, F., Henning, G.

Perception, 24, pages: S19, 1995 (poster)

Abstract
The human contrast sensitivity function (CSF) is bandpass for stimuli of low temporal frequency but, for moving stimuli, results in a low-pass CSF with large high spatial-frequency losses. Thus the high spatial-frequency content of images moving on the retina cannot be seen; motion perception could be facilitated by, or even be based on, the selective loss of high spatial-frequency content. 2-AFC image segmentation experiments were conducted with segmentation based on motion or on form. In the latter condition, the form difference mirrored that produced by moving stimuli. This was accomplished by generating stimulus elements which were spectrally either broadband or low-pass. For the motion used, the spectral difference between static broadband and static low-pass elements matched the spectral difference between moving and static broadband elements. On the hypothesis that segmentation from motion is based on the detection of regions devoid of high spatial-frequencies, both tasks should be similarly difficult for human observers. However, neither image segmentation (nor, incidentally, motion detection) was sensitive to the high spatial-frequency content of the stimuli. Thus changes in perceptual form produced by moving stimuli appear not to be used as a cue for image segmentation.

[BibTex]