2000


Engineering Support Vector Machine Kernels That Recognize Translation Initiation Sites

Zien, A., Rätsch, G., Mika, S., Schölkopf, B., Lengauer, T., Müller, K.

Bioinformatics, 16(9):799-807, September 2000 (article)

Abstract
Motivation: An important step in extracting protein sequences from nucleotide sequences is to recognize the points at which the protein-coding regions start. These points are called translation initiation sites (TIS). Results: The task of finding TIS can be modeled as a classification problem. We demonstrate the applicability of support vector machines for this task, and show how to incorporate prior biological knowledge by engineering an appropriate kernel function. With the described techniques the recognition performance can be improved by 26% over leading existing approaches. We provide evidence that existing related methods (e.g. ESTScan) could profit from advanced TIS recognition.

Web DOI [BibTex]

Three-dimensional reconstruction of planar scenes

Urbanek, M.

Biologische Kybernetik, INP Grenoble, Warsaw University of Technology, September 2000 (diplomathesis)

Abstract
For a planar scene, we propose an algorithm to estimate its 3D structure. Homographies between corresponding planes are employed to recover the camera motion between the positions from which the images of the scene were taken. The cases of one and of multiple corresponding planes in the scene are distinguished, and solutions are proposed for both.

ZIP [BibTex]

Analysis of Gene Expression Data with Pathway Scores

Zien, A., Küffner, R., Zimmer, R., Lengauer, T.

In ISMB 2000, pages: 407-417, AAAI Press, Menlo Park, CA, USA, 8th International Conference on Intelligent Systems for Molecular Biology, August 2000 (inproceedings)

Abstract
We present a new approach for the evaluation of gene expression data. The basic idea is to generate biologically possible pathways and to score them with respect to gene expression measurements. We suggest sample scoring functions for different problem specifications. The significance of the scores for the investigated pathways is assessed by comparison to a number of scores for random pathways. We show that simple scoring functions can assign statistically significant scores to biologically relevant pathways. This suggests that the combination of appropriate scoring functions with the systematic generation of pathways can be used in order to select the most interesting pathways based on gene expression measurements.
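The significance assessment described above can be sketched as a simple permutation-style test: score a candidate gene set with a hypothetical scoring function (here, mean absolute expression change; the paper's actual pathway scoring functions differ) and compare against random gene sets of the same size.

```python
import numpy as np

def pathway_score(expr_change, genes):
    """Hypothetical scoring function: mean absolute expression change
    of the genes on the pathway (illustrative stand-in)."""
    return float(np.mean(np.abs(expr_change[genes])))

def pathway_p_value(expr_change, genes, n_random=1000, seed=0):
    """Assess significance by comparing the pathway's score to scores
    of random gene sets of the same size, mirroring the comparison to
    random pathways described in the abstract."""
    rng = np.random.default_rng(seed)
    observed = pathway_score(expr_change, genes)
    n_genes = len(expr_change)
    random_scores = np.array([
        pathway_score(expr_change,
                      rng.choice(n_genes, size=len(genes), replace=False))
        for _ in range(n_random)
    ])
    # one-sided empirical p-value with add-one smoothing
    return (1 + np.sum(random_scores >= observed)) / (1 + n_random)

# Toy data: 100 genes; genes 0-4 are strongly regulated.
rng = np.random.default_rng(1)
expr_change = rng.normal(0.0, 0.1, size=100)
expr_change[:5] += 3.0
p = pathway_p_value(expr_change, genes=np.arange(5))
print(p)  # small: the candidate set scores far above random gene sets
```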

PDF [BibTex]

A Meanfield Approach to the Thermodynamics of a Protein-Solvent System with Application to the Oligomerization of the Tumour Suppressor p53.

Noolandi, J., Davison, TS., Vokel, A., Nie, F., Kay, C., Arrowsmith, C.

Proceedings of the National Academy of Sciences of the United States of America, 97(18):9955-9960, August 2000 (article)

Web [BibTex]

Observational Learning with Modular Networks

Shin, H., Lee, H., Cho, S.

In Lecture Notes in Computer Science (LNCS 1983), LNCS 1983, pages: 126-132, Springer-Verlag, Heidelberg, International Conference on Intelligent Data Engineering and Automated Learning (IDEAL), July 2000 (inproceedings)

Abstract
The observational learning algorithm (OLA) is an ensemble algorithm in which each network is initially trained with a bootstrapped data set, and virtual data generated from the ensemble are used for further training. Here we propose a modular OLA approach in which the original training set is partitioned into clusters and each network is instead trained with one of the clusters. The networks are then combined with weighting factors that are inversely proportional to the distance from the input vector to the cluster centers. Comparison with bagging and boosting shows that the proposed approach reduces the generalization error while employing a smaller number of networks.
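The combination rule described above can be sketched as follows (a minimal illustration; the Euclidean distance and the normalization are assumptions, not necessarily the paper's exact formulation):

```python
import numpy as np

def combination_weights(x, centers, eps=1e-12):
    """Weights for combining the modular networks' outputs: inversely
    proportional to the distance from input x to each cluster center,
    normalized to sum to one."""
    d = np.linalg.norm(centers - x, axis=1)
    w = 1.0 / (d + eps)  # eps guards against division by zero at a center
    return w / w.sum()

centers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
w = combination_weights(np.array([1.0, 0.0]), centers)
print(w)  # the network trained on the nearest cluster dominates
```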

PDF [BibTex]

The Infinite Gaussian Mixture Model

Rasmussen, CE.

In Advances in Neural Information Processing Systems 12, pages: 554-560, (Editors: S.A. Solla, T.K. Leen, K.-R. Müller), MIT Press, Cambridge, MA, USA, Thirteenth Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
In a Bayesian mixture model it is not necessary a priori to limit the number of components to be finite. In this paper an infinite Gaussian mixture model is presented which neatly sidesteps the difficult problem of finding the "right" number of mixture components. Inference in the model is done using an efficient parameter-free Markov chain that relies entirely on Gibbs sampling.
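The prior that lets the number of components stay unbounded can be illustrated by the Chinese restaurant process underlying Dirichlet-process mixtures; this sketch samples only that prior over partitions, not the paper's full Gibbs sampler for the infinite Gaussian mixture:

```python
import numpy as np

def crp_partition(n, alpha, seed=0):
    """Sample a partition of n items from the Chinese restaurant process:
    item i joins existing cluster k with probability counts[k]/(i+alpha)
    and opens a new cluster with probability alpha/(i+alpha)."""
    rng = np.random.default_rng(seed)
    assignments = [0]   # the first item opens the first cluster
    counts = [1]
    for i in range(1, n):
        probs = np.array(counts + [alpha]) / (i + alpha)
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments, counts

assignments, counts = crp_partition(500, alpha=2.0)
print(len(counts))  # number of occupied components, not fixed in advance
```

The number of occupied clusters grows with the data (roughly logarithmically), which is exactly what lets the model avoid fixing the number of mixture components beforehand.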

PDF Web [BibTex]

Generalization Abilities of Ensemble Learning Algorithms

Shin, H., Jang, M., Cho, S.

In Proc. of the Korean Brain Society Conference, pages: 129-133, Korean Brain Society Conference, June 2000 (inproceedings)

[BibTex]

Support vector method for novelty detection

Schölkopf, B., Williamson, R., Smola, A., Shawe-Taylor, J., Platt, J.

In Advances in Neural Information Processing Systems 12, pages: 582-588, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
Suppose you are given some dataset drawn from an underlying probability distribution P and you want to estimate a "simple" subset S of input space such that the probability that a test point drawn from P lies outside of S equals some a priori specified ν between 0 and 1. We propose a method to approach this problem by trying to estimate a function f which is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. We provide a theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabelled data.
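The idea of a kernel expansion that is positive on the estimated region, with ν controlling the fraction of points left outside, can be illustrated with a much-simplified Parzen-style score; the actual algorithm instead finds sparse expansion weights via a quadratic program, so everything below is an illustrative stand-in:

```python
import numpy as np

def rbf(X, Z, gamma=0.5):
    """Gaussian kernel matrix between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_threshold(X, nu=0.1, gamma=0.5):
    """Score every training point by a uniform kernel expansion over the
    training data, then set the threshold at the nu-quantile of those
    scores, so that about a fraction nu of training points falls outside
    the estimated region (mirroring the role of nu in the paper)."""
    scores = rbf(X, X, gamma).mean(axis=1)
    return np.quantile(scores, nu)

def is_inlier(x, X, threshold, gamma=0.5):
    """Positive decision: the kernel-expansion score exceeds the threshold."""
    return rbf(np.atleast_2d(x), X, gamma).mean(axis=1) >= threshold

rng = np.random.default_rng(0)
X = rng.normal(0, 1, size=(200, 2))   # unlabelled training data, one cluster
thr = fit_threshold(X, nu=0.1)
print(bool(is_inlier(np.array([0.0, 0.0]), X, thr)[0]))  # near the mass
print(bool(is_inlier(np.array([6.0, 6.0]), X, thr)[0]))  # far-away outlier
```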

PDF Web [BibTex]

Solving Satisfiability Problems with Genetic Algorithms

Harmeling, S.

In Genetic Algorithms and Genetic Programming at Stanford 2000, pages: 206-213, (Editors: Koza, J. R.), Stanford Bookstore, Stanford, CA, USA, June 2000 (inbook)

Abstract
We show how to solve hard 3-SAT problems using genetic algorithms. Furthermore, we explore other genetic operators that may be useful to tackle 3-SAT problems, and discuss their pros and cons.
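A minimal genetic algorithm for 3-SAT in this spirit might look as follows; the operator choices (tournament selection, uniform crossover, bit-flip mutation, elitism) are common defaults, not necessarily the chapter's exact configuration:

```python
import random

def num_satisfied(assignment, clauses):
    """Fitness: number of satisfied clauses. A literal v > 0 requires
    variable v-1 to be True; v < 0 requires variable -v-1 to be False."""
    return sum(
        any(assignment[abs(v) - 1] == (v > 0) for v in clause)
        for clause in clauses
    )

def ga_3sat(clauses, n_vars, pop_size=60, generations=200, p_mut=0.02, seed=0):
    """Evolve bitstring assignments until all clauses are satisfied
    or the generation budget runs out."""
    rng = random.Random(seed)
    fit = lambda a: num_satisfied(a, clauses)
    pop = [[rng.random() < 0.5 for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(generations):
        best = max(pop, key=fit)
        if fit(best) == len(clauses):
            return best
        new_pop = [best]  # elitism: keep the current best
        while len(new_pop) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fit)   # tournament selection
            p2 = max(rng.sample(pop, 3), key=fit)
            child = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]
            child = [not g if rng.random() < p_mut else g for g in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fit)

# A small satisfiable 3-SAT instance over 8 variables.
clauses = [(1, 2, -3), (-1, 4, 5), (3, -4, 6), (-2, -5, 7), (1, -6, -7),
           (2, 3, 8), (-3, 5, -8), (4, -7, 8), (-1, -2, 6), (5, 6, 7)]
solution = ga_3sat(clauses, n_vars=8)
print(num_satisfied(solution, clauses), "of", len(clauses), "clauses satisfied")
```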

PDF [BibTex]

ν-Arc: Ensemble Learning in the Presence of Outliers

Rätsch, G., Schölkopf, B., Smola, A., Müller, K., Onoda, T., Mika, S.

In Advances in Neural Information Processing Systems 12, pages: 561-567, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
AdaBoost and other ensemble methods have successfully been applied to a number of classification tasks, seemingly defying problems of overfitting. AdaBoost performs gradient descent in an error function with respect to the margin, asymptotically concentrating on the patterns which are hardest to learn. For very noisy problems, however, this can be disadvantageous. Indeed, theoretical analysis has shown that the margin distribution, as opposed to just the minimal margin, plays a crucial role in understanding this phenomenon. Loosely speaking, some outliers should be tolerated if this has the benefit of substantially increasing the margin on the remaining points. We propose a new boosting algorithm which allows for the possibility of a pre-specified fraction of points to lie in the margin area or even on the wrong side of the decision boundary.

PDF Web [BibTex]

Invariant feature extraction and classification in kernel spaces

Mika, S., Rätsch, G., Weston, J., Schölkopf, B., Smola, A., Müller, K.

In Advances in neural information processing systems 12, pages: 526-532, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

PDF Web [BibTex]

Transductive Inference for Estimating Values of Functions

Chapelle, O., Vapnik, V., Weston, J.

In Advances in Neural Information Processing Systems 12, pages: 421-427, (Editors: S.A. Solla, T.K. Leen, K.-R. Müller), MIT Press, Cambridge, MA, USA, Thirteenth Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
We introduce an algorithm for estimating the values of a function at a set of test points $x_1^*, \dots, x_m^*$ given a set of training points $(x_1, y_1), \dots, (x_\ell, y_\ell)$ without estimating (as an intermediate step) the regression function. We demonstrate that this direct (transductive) way of estimating values of the regression (or classification in pattern recognition) is more accurate than the traditional one based on two steps: first estimating the function and then calculating the values of this function at the points of interest.

PDF Web [BibTex]

The entropy regularization information criterion

Smola, A., Shawe-Taylor, J., Schölkopf, B., Williamson, R.

In Advances in Neural Information Processing Systems 12, pages: 342-348, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
Effective methods of capacity control via uniform convergence bounds for function expansions have been largely limited to Support Vector machines, where good bounds are obtainable by the entropy number approach. We extend these methods to systems with expansions in terms of arbitrary (parametrized) basis functions and a wide range of regularization methods covering the whole range of general linear additive models. This is achieved by a data dependent analysis of the eigenvalues of the corresponding design matrix.

PDF Web [BibTex]

Model Selection for Support Vector Machines

Chapelle, O., Vapnik, V.

In Advances in Neural Information Processing Systems 12, pages: 230-236, (Editors: S.A. Solla, T.K. Leen, K.-R. Müller), MIT Press, Cambridge, MA, USA, Thirteenth Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
New functionals for parameter (model) selection of Support Vector Machines are introduced based on the concepts of the span of support vectors and rescaling of the feature space. It is shown that using these functionals, one can both predict the best choice of parameters of the model and the relative quality of performance for any value of parameter.

PDF Web [BibTex]

New Support Vector Algorithms

Schölkopf, B., Smola, A., Williamson, R., Bartlett, P.

Neural Computation, 12(5):1207-1245, May 2000 (article)

Abstract
We propose a new class of support vector algorithms for regression and classification. In these algorithms, a parameter ν lets one effectively control the number of support vectors. While this can be useful in its own right, the parameterization has the additional benefit of enabling us to eliminate one of the other free parameters of the algorithm: the accuracy parameter ε in the regression case, and the regularization constant C in the classification case. We describe the algorithms, give some theoretical results concerning the meaning and the choice of ν, and report experimental results.

Web DOI [BibTex]

Generalization Abilities of Ensemble Learning Algorithms: OLA, Bagging, Boosting

Shin, H., Jang, M., Cho, S., Lee, B., Lim, Y.

In Proc. of the Korea Information Science Conference, pages: 226-228, Conference on Korean Information Science, April 2000 (inproceedings)

[BibTex]

A simple iterative approach to parameter optimization

Zien, A., Zimmer, R., Lengauer, T.

In RECOMB2000, pages: 318-327, ACM Press, New York, NY, USA, Fourth Annual Conference on Research in Computational Molecular Biology, April 2000 (inproceedings)

Abstract
Various bioinformatics problems require optimizing several different properties simultaneously. For example, in the protein threading problem, a linear scoring function combines the values for different properties of possible sequence-to-structure alignments into a single score to allow for unambiguous optimization. In this context, an essential question is how each property should be weighted. As the native structures are known for some sequences, the implied partial ordering on optimal alignments may be used to adjust the weights. To resolve the arising interdependence of weights and computed solutions, we propose a novel approach: iterating the computation of solutions (here: threading alignments) given the weights and the estimation of optimal weights of the scoring function given these solutions via a systematic calibration method. We show that this procedure converges to structurally meaningful weights, which also lead to significantly improved performance on comprehensive test data sets as measured in different ways. The latter indicates that the performance of threading can be improved in general.

Web DOI [BibTex]

Contrast discrimination using periodic pulse trains

Wichmann, F., Henning, G.

pages: 74, 3rd Tübinger Wahrnehmungskonferenz (TWK), February 2000 (poster)

Abstract
Understanding contrast transduction is essential for understanding spatial vision. Previous research (Wichmann et al. 1998; Wichmann, 1999; Henning and Wichmann, 1999) has demonstrated the importance of high contrasts to distinguish between alternative models of contrast discrimination. However, the modulation transfer function of the eye imposes large contrast losses on stimuli, particularly for stimuli of high spatial frequency, making high retinal contrasts difficult to obtain using sinusoidal gratings. Standard 2AFC contrast discrimination experiments were conducted using periodic pulse trains as stimuli. Given our Mitsubishi display we achieve stimuli with up to 160% contrast at the fundamental frequency. The shape of the threshold versus (pedestal) contrast (TvC) curve using pulse trains shows the characteristic dipper shape, i.e. contrast discrimination is sometimes “easier” than detection. The rising part of the TvC function has the same slope as that measured for contrast discrimination using sinusoidal gratings of the same frequency as the fundamental. Periodic pulse trains offer the possibility to explore the visual system’s properties using high retinal contrasts. Thus they might prove useful in tasks other than contrast discrimination. Second, at least for high spatial frequencies (8 c/deg) it appears that contrast discrimination using sinusoids and periodic pulse trains results in virtually identical TvC functions, indicating a lack of probability summation. Further implications of these results are discussed.

Web [BibTex]

Subliminale Darbietung verkehrsrelevanter Information in Kraftfahrzeugen

Staedtgen, M., Hahn, S., Franz, MO., Spitzer, M.

pages: 98, (Editors: H.H. Bülthoff, K.R. Gegenfurtner, H.A. Mallot), 3rd Tübinger Wahrnehmungskonferenz (TWK), February 2000 (poster)

Abstract
Modern image-processing technology makes it possible to automatically detect certain critical traffic situations in motor vehicles and to warn or inform the driver. One problem is how to present the results in a way that burdens the driver as little as possible and does not divert his attention from the traffic by additional warning lights or acoustic signals. A series of experiments therefore investigated whether subliminally presented, i.e. not consciously perceived, traffic-relevant information affects behaviour and can be used to convey information to the driver. In a semantic priming experiment using a lexical decision task, it was shown that words related to road traffic are processed faster when a related image of a traffic sign was presented subliminally beforehand. A speed-up was also obtained with parafoveal presentation of the subliminal stimuli. In a visual search task, traffic signs in images of real traffic scenes were detected faster when the image of the traffic sign had previously been presented subliminally. In both experiments the presentation time for the cues was 17 ms, and conscious perception was additionally prevented by forward and backward masking. These laboratory studies showed that subliminally presented stimuli can also speed up information processing in the context of road traffic. A third experiment examined the effect of a subliminal cue on braking reaction time in a real driving test. The participants (n=17) were instructed to brake as quickly as possible when the brake lights of a vehicle driving 12-15 m ahead lit up.
In 50 of a total of 100 trials, a subliminal stimulus (two red dots, one centimetre in diameter and ten centimetres apart) was presented 150 ms before the brake lights came on. The presentation was made via a TFT-LCD display integrated into the car in place of the speedometer. Compared with reactions without the subliminal stimulus, reaction time was significantly shortened by 51 ms. The experiments described here showed that the subliminal presentation of traffic-relevant information can also influence behaviour in motor vehicles. In the future, combining online image processing in the vehicle with subliminal presentation of the results could improve traffic safety and comfort.

Web [BibTex]

Statistical Learning and Kernel Methods

Schölkopf, B.

In Data Fusion and Perception, CISM Courses and Lectures Vol. 431, pages: 3-24, (Editors: G Della Riccia and H-J Lenz and R Kruse), Springer, Vienna, 2000 (inbook)

[BibTex]

Bounds on Error Expectation for Support Vector Machines

Vapnik, V., Chapelle, O.

Neural Computation, 12(9):2013-2036, 2000 (article)

Abstract
We introduce the concept of span of support vectors (SV) and show that the generalization ability of support vector machines (SVM) depends on this new geometrical concept. We prove that the value of the span is always smaller (and can be much smaller) than the diameter of the smallest sphere containing the support vectors, used in previous bounds. We also demonstrate experimentally that the prediction of the test error given by the span is very accurate and has direct application in model selection (choice of the optimal parameters of the SVM).

GZIP [BibTex]

Intelligence as a Complex System

Zhou, D.

Biologische Kybernetik, 2000 (phdthesis)

[BibTex]

Neural Networks in Robot Control

Peters, J.

Biologische Kybernetik, Fernuniversität Hagen, Hagen, Germany, 2000 (diplomathesis)

[BibTex]

Bayesian modelling of fMRI time series

Højen-Sørensen, PADFR., Rasmussen, CE., Hansen, LK.

In Advances in Neural Information Processing Systems 12, pages: 754-760, (Editors: Sara A. Solla, Todd K. Leen and Klaus-Robert Müller), MIT Press, Cambridge, MA, USA, 2000 (inproceedings)

Abstract
We present a Hidden Markov Model (HMM) for inferring the hidden psychological state (or neural activity) during single trial fMRI activation experiments with blocked task paradigms. Inference is based on Bayesian methodology, using a combination of analytical and a variety of Markov Chain Monte Carlo (MCMC) sampling techniques. The advantage of this method is that detection of short time learning effects between repeated trials is possible since inference is based only on single trial experiments.

PDF PostScript [BibTex]

An Introduction to Kernel-Based Learning Algorithms

Müller, K., Mika, S., Rätsch, G., Tsuda, K., Schölkopf, B.

In Handbook of Neural Network Signal Processing, 4, (Editors: Yu Hen Hu and Jang-Neng Hwang), CRC Press, 2000 (inbook)

[BibTex]

Choosing nu in support vector regression with different noise models — theory and experiments

Chalimourda, A., Schölkopf, B., Smola, A.

In Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks, IJCNN 2000, Neural Computing: New Challenges and Perspectives for the New Millennium, IEEE, International Joint Conference on Neural Networks, 2000 (inproceedings)

[BibTex]

A High Resolution and Accurate Pentium Based Timer

Ong, CS., Wong, F., Lai, WK.

In 2000 (inproceedings)

PDF [BibTex]

Robust Ensemble Learning for Data Mining

Rätsch, G., Schölkopf, B., Smola, A., Mika, S., Onoda, T., Müller, K.

In Fourth Pacific-Asia Conference on Knowledge Discovery and Data Mining, 1805, pages: 341-341, Lecture Notes in Artificial Intelligence, (Editors: H. Terano), Fourth Pacific-Asia Conference on Knowledge Discovery and Data Mining, 2000 (inproceedings)

[BibTex]

Sparse greedy matrix approximation for machine learning.

Smola, A., Schölkopf, B.

In 17th International Conference on Machine Learning, Stanford, 2000, pages: 911-918, (Editors: P Langley), Morgan Kaufmann, San Francisco, CA, USA, 17th International Conference on Machine Learning (ICML), 2000 (inproceedings)

[BibTex]

The Kernel Trick for Distances

Schölkopf, B.

(MSR-TR-2000-51), Microsoft Research, Redmond, WA, USA, 2000 (techreport)

Abstract
A method is described which, like the kernel trick in support vector machines (SVMs), lets us generalize distance-based algorithms to operate in feature spaces, usually nonlinearly related to the input space. This is done by identifying a class of kernels which can be represented as norm-based distances in Hilbert spaces. It turns out that common kernel algorithms, such as SVMs and kernel PCA, are actually distance-based algorithms and can be run with that class of kernels, too. As well as providing a useful new insight into how these algorithms work, the present work can form the basis for conceiving new algorithms.
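The core identity is that a suitable kernel k induces the feature-space distance ||phi(x) - phi(y)||^2 = k(x,x) - 2 k(x,y) + k(y,y), computable without ever constructing the feature map phi. A minimal sketch:

```python
import numpy as np

def kernel_distance(x, y, k):
    """Distance in the feature space induced by kernel k, via
    ||phi(x) - phi(y)||^2 = k(x,x) - 2 k(x,y) + k(y,y)."""
    return np.sqrt(k(x, x) - 2 * k(x, y) + k(y, y))

linear = lambda x, y: float(np.dot(x, y))
gauss = lambda x, y: float(np.exp(-0.5 * np.sum((x - y) ** 2)))

x, y = np.array([1.0, 2.0]), np.array([4.0, 6.0])
# With the linear kernel the feature space is the input space itself,
# so the kernel-induced distance reduces to the Euclidean distance:
print(kernel_distance(x, y, linear))  # 5.0
# With the Gaussian kernel, distances are measured in a nonlinear
# feature space that is never constructed explicitly:
print(kernel_distance(x, y, gauss))
```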

PDF Web [BibTex]

Entropy Numbers of Linear Function Classes.

Williamson, R., Smola, A., Schölkopf, B.

In 13th Annual Conference on Computational Learning Theory, pages: 309-319, (Editors: N Cesa-Bianchi and S Goldman), Morgan Kaufmann, San Francisco, CA, USA, 13th Annual Conference on Computational Learning Theory (COLT), 2000 (inproceedings)

[BibTex]

Kernel method for percentile feature extraction

Schölkopf, B., Platt, J., Smola, A.

(MSR-TR-2000-22), Microsoft Research, 2000 (techreport)

Abstract
A method is proposed which computes a direction in a dataset such that a specified fraction of a particular class of all examples is separated from the overall mean by a maximal margin. The projector onto that direction can be used for class-specific feature extraction. The algorithm is carried out in a feature space associated with a support vector kernel function; hence it can be used to construct a large class of nonlinear feature extractors. In the particular case where there exists only one class, the method can be thought of as a robust form of principal component analysis, where instead of variance we maximize percentile thresholds. Finally, we generalize it to also include the possibility of specifying negative examples.

PDF [BibTex]


1998


Book Review: An Introduction to Fuzzy Logic for Practical Applications

Peters, J.

Künstliche Intelligenz (KI), 98(4):60-60, November 1998 (article)

[BibTex]

Navigation mit Schnappschüssen

Franz, M., Schölkopf, B., Mallot, H., Bülthoff, H., Zell, A.

In Mustererkennung 1998, pages: 421-428, (Editors: P Levi and R-J Ahlers and F May and M Schanz), Springer, Berlin, Germany, 20th DAGM-Symposium, October 1998 (inproceedings)

Abstract
A biologically inspired algorithm is presented with which a place can be found again at which a 360-degree view of the surroundings was previously recorded. The direction to the goal is computed from the displacement of the image positions of the surrounding landmarks relative to the snapshot. The convergence properties of the algorithm are analyzed mathematically and tested on mobile robots.

PDF Web [BibTex]

Where did I take that snapshot? Scene-based homing by image matching

Franz, M., Schölkopf, B., Bülthoff, H.

Biological Cybernetics, 79(3):191-202, October 1998 (article)

Abstract
In homing tasks, the goal is often not marked by visible objects but must be inferred from the spatial relation to the visual cues in the surrounding scene. The exact computation of the goal direction would require knowledge about the distances to visible landmarks, information, which is not directly available to passive vision systems. However, if prior assumptions about typical distance distributions are used, a snapshot taken at the goal suffices to compute the goal direction from the current view. We show that most existing approaches to scene-based homing implicitly assume an isotropic landmark distribution. As an alternative, we propose a homing scheme that uses parameterized displacement fields. These are obtained from an approximation that incorporates prior knowledge about perspective distortions of the visual environment. A mathematical analysis proves that both approximations do not prevent the schemes from approaching the goal with arbitrary accuracy, but lead to different errors in the computed goal direction. Mobile robot experiments are used to test the theoretical predictions and to demonstrate the practical feasibility of the new approach.

PDF PDF DOI [BibTex]

On a Kernel-Based Method for Pattern Recognition, Regression, Approximation, and Operator Inversion

Smola, A., Schölkopf, B.

Algorithmica, 22(1-2):211-231, September 1998 (article)

Abstract
We present a kernel-based framework for pattern recognition, regression estimation, function approximation, and multiple operator inversion. Adopting a regularization-theoretic framework, the above are formulated as constrained optimization problems. Previous approaches such as ridge regression, support vector methods, and regularization networks are included as special cases. We show connections between the cost function and some properties up to now believed to apply to support vector machines only. For appropriately chosen cost functions, the optimal solution of all the problems described above can be found by solving a simple quadratic programming problem.

PDF DOI [BibTex]


The moon tilt illusion

Schölkopf, B.

Perception, 27(10):1229-1232, August 1998 (article)

Abstract
Besides the familiar moon illusion [eg Hershenson, 1989 The Moon illusion (Hillsdale, NJ: Lawrence Erlbaum Associates)], wherein the moon appears bigger when it is close to the horizon, there is a less known illusion which causes the moon's illuminated side to appear turned away from the direction of the sun. An experiment documenting the effect is described, and a possible explanation is put forward.

Web DOI [BibTex]

Characterization of the oligomerization defects of two p53 mutants found in families with Li-Fraumeni and Li-Fraumeni-like syndrome.

Davison, T., Yin, P., Nie, E., Kay, C., Arrowsmith, CH.

Oncogene, 17(5):651-656, August 1998 (article)

Abstract
Recently two germline mutations in the oligomerization domain of p53 have been identified in patients with Li-Fraumeni and Li-Fraumeni-like Syndromes. We have used biophysical and biochemical methods to characterize these two mutants in order to better understand their functional defects and the role of the p53 oligomerization domain (residues 325-355) in oncogenesis. We find that residues 310-360 of the L344P mutant are monomeric, apparently unfolded and cannot interact with wild-type (WT) p53. The full length L344P protein is unable to bind sequence specifically to DNA and is therefore an inactive, but not a dominant negative mutant. R337C, on the other hand, can form dimers and tetramers, can hetero-oligomerize with WTp53 and can bind to a p53 consensus element. However, the thermal stability of R337C is much lower than that of WTp53 and at physiological temperatures more than half of this mutant is less than tetrameric. Thus, the R337C mutant retains some functional activity yet leads to a predisposition to cancer, suggesting that even partial inactivation of p53 oligomerization is sufficient for accelerated tumour progression.

Web [BibTex]


Nonlinear Component Analysis as a Kernel Eigenvalue Problem

Schölkopf, B., Smola, A., Müller, K.

Neural Computation, 10(5):1299-1319, July 1998 (article)

Abstract
A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map—for instance, the space of all possible five-pixel products in 16 × 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
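A minimal numpy sketch of the procedure (the Gaussian kernel and parameter values are illustrative choices, not those of the paper): form the kernel matrix, center it in feature space, and project the data onto its leading eigenvectors.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA with a Gaussian kernel: eigendecompose the centered
    kernel matrix and return the projections of the training points."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    # center in feature space: Kc = (I - 1/n) K (I - 1/n)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)            # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]     # reorder to descending
    # scale eigenvectors so expansion coefficients have unit feature-space norm
    alphas = vecs[:, :n_components] / np.sqrt(np.maximum(vals[:n_components], 1e-12))
    return Kc @ alphas                         # projections of the training data

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
Z = kernel_pca(X, n_components=2)
print(Z.shape)  # (50, 2)
```

The principal components live in the kernel-induced feature space, so the nonlinear map itself is never computed; only the n x n kernel matrix is needed.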

Web DOI [BibTex]

SVMs — a practical consequence of learning theory

Schölkopf, B.

IEEE Intelligent Systems and their Applications, 13(4):18-21, July 1998 (article)

Abstract
My first exposure to Support Vector Machines came this spring when I heard Sue Dumais present impressive results on text categorization using this analysis technique. This issue's collection of essays should help familiarize our readers with this interesting new racehorse in the Machine Learning stable. Bernhard Schölkopf, in an introductory overview, points out that a particular advantage of SVMs over other learning algorithms is that they can be analyzed theoretically using concepts from computational learning theory, and at the same time can achieve good performance when applied to real problems. Examples of these real-world applications are provided by Sue Dumais, who describes the aforementioned text-categorization problem, yielding the best results to date on the Reuters collection, and Edgar Osuna, who presents strong results on application to face detection. Our fourth author, John Platt, gives us a practical guide and a new technique for implementing the algorithm efficiently.

PDF Web DOI [BibTex]

Support vector machines

Hearst, M., Dumais, S., Osuna, E., Platt, J., Schölkopf, B.

IEEE Intelligent Systems and their Applications, 13(4):18-28, July 1998 (article)

Abstract
My first exposure to Support Vector Machines came this spring when I heard Sue Dumais present impressive results on text categorization using this analysis technique. This issue's collection of essays should help familiarize our readers with this interesting new racehorse in the Machine Learning stable. Bernhard Schölkopf, in an introductory overview, points out that a particular advantage of SVMs over other learning algorithms is that they can be analyzed theoretically using concepts from computational learning theory, and at the same time can achieve good performance when applied to real problems. Examples of these real-world applications are provided by Sue Dumais, who describes the aforementioned text-categorization problem, yielding the best results to date on the Reuters collection, and Edgar Osuna, who presents strong results on application to face detection. Our fourth author, John Platt, gives us a practical guide and a new technique for implementing the algorithm efficiently.

PDF Web DOI [BibTex]



no image
The connection between regularization operators and support vector kernels.

Smola, A., Schölkopf, B., Müller, K.

Neural Networks, 11(4):637-649, June 1998 (article)

Abstract
In this paper a correspondence is derived between regularization operators used in regularization networks and support vector kernels. We prove that the Green's functions associated with regularization operators are suitable support vector kernels with equivalent regularization properties. Moreover, the paper provides an analysis of currently used support vector kernels in view of regularization theory and the corresponding operators associated with the classes of both polynomial kernels and translation-invariant kernels. The latter are also analyzed on periodic domains. As a by-product we show that a large number of radial basis functions, namely conditionally positive definite functions, may be used as support vector kernels.
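The admissibility criterion for translation-invariant kernels can be illustrated numerically: by Bochner's theorem, k(x, y) = k(x − y) is a suitable support vector kernel exactly when the Fourier transform of k is non-negative. The following is an illustrative sketch of that check, not code from the paper; the helper name and the two sampled kernel profiles are our own choices.

```python
import numpy as np

def has_nonnegative_spectrum(k_vals, tol=1e-8):
    """Test Bochner's condition on a symmetric kernel profile sampled on a grid."""
    # ifftshift moves the x = 0 sample to index 0 so that the DFT of the
    # symmetric profile comes out (numerically) real
    spectrum = np.fft.fft(np.fft.ifftshift(k_vals)).real
    return bool(np.all(spectrum >= -tol))

x = np.linspace(-10, 10, 512, endpoint=False)
gaussian = np.exp(-0.5 * x**2)            # Gaussian RBF profile
boxcar = (np.abs(x) < 1).astype(float)    # indicator ("boxcar") profile

print(has_nonnegative_spectrum(gaussian))  # True: admissible SV kernel
print(has_nonnegative_spectrum(boxcar))    # False: its spectrum (a Dirichlet
                                           # kernel) takes negative values
```

The Gaussian passes because its (periodized) spectrum is strictly positive; the boxcar fails, which is why an indicator function cannot serve as a translation-invariant SV kernel.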

PDF DOI [BibTex]



no image
Prior knowledge in support vector kernels

Schölkopf, B., Simard, P., Smola, A., Vapnik, V.

In Advances in Neural Information Processing Systems 10, pages: 640-646 , (Editors: M Jordan and M Kearns and S Solla ), MIT Press, Cambridge, MA, USA, Eleventh Annual Conference on Neural Information Processing (NIPS), June 1998 (inproceedings)

PDF Web [BibTex]



no image
From regularization operators to support vector kernels

Smola, A., Schölkopf, B.

In Advances in Neural Information Processing Systems 10, pages: 343-349, (Editors: M Jordan and M Kearns and S Solla), MIT Press, Cambridge, MA, USA, 11th Annual Conference on Neural Information Processing (NIPS), June 1998 (inproceedings)

PDF Web [BibTex]



no image
Eine beweistheoretische Anwendung der

Harmeling, S.

Biologische Kybernetik, Westfälische Wilhelms-Universität Münster, Münster, May 1998 (diplomathesis)

PDF [BibTex]



no image
Qualitative Modeling for Data Miner’s Requirements

Shin, H., Jhee, W.

In Proc. of the Korean Management Information Systems, pages: 65-73, Conference on the Korean Management Information Systems, April 1998 (inproceedings)

[BibTex]



no image
Übersicht durch Übersehen

Schölkopf, B.

Frankfurter Allgemeine Zeitung, Wissenschaftsbeilage, March 1998 (misc)

[BibTex]



no image
Learning view graphs for robot navigation

Franz, M., Schölkopf, B., Mallot, H., Bülthoff, H.

Autonomous Robots, 5(1):111-125, March 1998 (article)

Abstract
We present a purely vision-based scheme for learning a topological representation of an open environment. The system represents selected places by local views of the surrounding scene, and finds traversable paths between them. The set of recorded views and their connections are combined into a graph model of the environment. To navigate between views connected in the graph, we employ a homing strategy inspired by findings of insect ethology. In robot experiments, we demonstrate that complex visual exploration and navigation tasks can thus be performed without using metric information.
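The topological representation described above can be caricatured in a few lines: recorded views become graph nodes, traversable paths become edges, and navigation reduces to graph search followed by visual homing along each edge. This toy sketch is our own illustration (class and place names are hypothetical, and the homing step itself is omitted), not the authors' implementation.

```python
from collections import deque

class ViewGraph:
    """Topological map: local views as nodes, traversable paths as edges."""

    def __init__(self):
        self.edges = {}  # view name -> set of directly reachable views

    def add_view(self, name):
        self.edges.setdefault(name, set())

    def connect(self, a, b):
        # a traversable path was found between the two views (both directions)
        self.add_view(a)
        self.add_view(b)
        self.edges[a].add(b)
        self.edges[b].add(a)

    def route(self, start, goal):
        """Shortest chain of views; each hop would be executed by visual homing."""
        prev, frontier = {start: None}, deque([start])
        while frontier:
            v = frontier.popleft()
            if v == goal:
                path = []
                while v is not None:
                    path.append(v)
                    v = prev[v]
                return path[::-1]
            for w in self.edges[v]:
                if w not in prev:
                    prev[w] = v
                    frontier.append(w)
        return None  # goal view not reachable in the graph

g = ViewGraph()
g.connect("door", "desk")
g.connect("desk", "window")
print(g.route("door", "window"))  # ['door', 'desk', 'window']
```

Because the map is purely topological, no metric coordinates appear anywhere: the planner only returns a chain of views, and local view-based homing carries the robot from one view to the next.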

PDF PDF DOI [BibTex]



no image
Masking by plaid patterns: effects of presentation time and mask contrast

Wichmann, F., Henning, G.

pages: 115, 1. Tübinger Wahrnehmungskonferenz (TWK 98), February 1998 (poster)

Abstract
Most current models of early spatial vision comprise sets of orientation- and spatial-frequency-selective filters, with or without limited non-linear interactions amongst different subsets of the filters. The performance of human observers and of such models of human spatial vision was compared in experiments using maskers with two spatial frequencies (plaid masks). The detectability of horizontally oriented sinusoidal signals at 3.02 c/deg was measured in standard 2AFC tasks in the presence of plaid patterns with two components at the same spatial frequency as the signal but at different orientations (+/- 15, 30, 45, and 75 deg from the signal) and with varying contrasts (1.0, 6.25 and 25.0% contrast). In addition, the temporal envelope of the stimulus presentation was either a rectangular pulse of 19.7 msec duration or a temporal Hanning window of 1497 msec. Threshold elevation varied with plaid component orientation and peaked +/- 30 deg from the signal, where nearly a log unit of threshold elevation was observed for the 25.0% contrast plaid. For plaids with 1.0% contrast we observed significant facilitation even with plaids whose components were 75 deg from that of the signal. Elevation factors were somewhat lower for the short stimulus presentation time but were still significant (up to a factor of 5 or 6). Despite the simple nature of the stimuli employed in this study (a sinusoidal signal and plaid masks comprised of only two sinusoids), none of the current models of early spatial vision can fully account for all the data gathered.

Web [BibTex]
