2000

A Meanfield Approach to the Thermodynamics of a Protein-Solvent System with Application to the Oligomerization of the Tumour Suppressor p53.

Noolandi, J., Davison, TS., Völkel, A., Nie, F., Kay, C., Arrowsmith, C.

Proceedings of the National Academy of Sciences of the United States of America, 97(18):9955-9960, August 2000 (article)

Web [BibTex]

Observational Learning with Modular Networks

Shin, H., Lee, H., Cho, S.

In Lecture Notes in Computer Science (LNCS 1983), pages: 126-132, Springer-Verlag, Heidelberg, International Conference on Intelligent Data Engineering and Automated Learning (IDEAL), July 2000 (inproceedings)

Abstract
The observational learning algorithm (OLA) is an ensemble algorithm in which each network is initially trained with a bootstrapped data set, and virtual data generated from the ensemble are used for further training. Here we propose a modular OLA approach in which the original training set is partitioned into clusters and each network is instead trained with one of the clusters. Networks are then combined with weighting factors that are inversely proportional to the distance from the input vector to the cluster centers. Comparison with bagging and boosting shows that the proposed approach reduces generalization error while employing a smaller number of networks.
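
A minimal sketch of the clustering-and-weighting idea in Python (our illustration, not the authors' code; OLA's virtual-data generation step is omitted): the training set is partitioned with k-means, one small network is trained per cluster, and predictions are blended with weights inversely proportional to the distance to each cluster center.

    # Modular ensemble: one network per cluster, inverse-distance blending.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(300, 1))
    y = np.sin(X).ravel() + 0.1 * rng.standard_normal(300)

    k = 3
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    nets = [MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
            .fit(X[km.labels_ == j], y[km.labels_ == j]) for j in range(k)]

    def predict(x):
        # Weights inversely proportional to distance to the cluster centers.
        d = np.linalg.norm(km.cluster_centers_ - x, axis=1) + 1e-9
        w = (1.0 / d) / (1.0 / d).sum()
        return sum(wj * net.predict(x.reshape(1, -1))[0] for wj, net in zip(w, nets))

    print(predict(np.array([1.0])))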

PDF [BibTex]

The Infinite Gaussian Mixture Model

Rasmussen, CE.

In Advances in Neural Information Processing Systems 12, pages: 554-560, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
In a Bayesian mixture model it is not necessary a priori to limit the number of components to be finite. In this paper an infinite Gaussian mixture model is presented which neatly sidesteps the difficult problem of finding the “right” number of mixture components. Inference in the model is done using an efficient parameter-free Markov chain that relies entirely on Gibbs sampling.
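
A hedged sketch of the key prior at work: in the infinite limit, component assignments follow a Chinese restaurant process, so a Gibbs sweep reassigns each point either to an existing component (probability proportional to its occupancy) or to a new one (probability proportional to a concentration parameter alpha). The snippet samples from that prior only; the paper's full sampler also conditions on the data likelihood.

    # Component assignments drawn from a Chinese restaurant process prior
    # (illustrates only the prior underlying such samplers; our sketch).
    import numpy as np

    def crp(n, alpha, rng):
        counts, z = [], []
        for _ in range(n):
            probs = np.array(counts + [alpha], dtype=float)
            probs /= probs.sum()
            k = rng.choice(len(probs), p=probs)
            if k == len(counts):
                counts.append(1)      # open a new component
            else:
                counts[k] += 1        # join an existing component
            z.append(k)
        return z

    print(crp(20, alpha=1.0, rng=np.random.default_rng(0)))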

PDF Web [BibTex]

Generalization Abilities of Ensemble Learning Algorithms

Shin, H., Jang, M., Cho, S.

In Proc. of the Korean Brain Society Conference, pages: 129-133, Korean Brain Society Conference, June 2000 (inproceedings)

[BibTex]

Support vector method for novelty detection

Schölkopf, B., Williamson, R., Smola, A., Shawe-Taylor, J., Platt, J.

In Advances in Neural Information Processing Systems 12, pages: 582-588, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
Suppose you are given some dataset drawn from an underlying probability distribution P and you want to estimate a “simple” subset S of input space such that the probability that a test point drawn from P lies outside of S equals some a priori specified ν between 0 and 1. We propose a method to approach this problem by trying to estimate a function f which is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. We provide a theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabelled data.
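
This algorithm is implemented in scikit-learn as OneClassSVM, with the nu parameter playing the role of ν; a brief usage sketch on toy data of our own:

    # Novelty detection: nu upper-bounds the fraction of training points
    # treated as outliers (toy data; our illustration).
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    X_train = rng.standard_normal((200, 2))         # "normal" data
    X_test = np.array([[0.1, -0.2], [4.0, 4.0]])    # an inlier and an outlier

    clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05).fit(X_train)
    print(clf.predict(X_test))                      # +1 = inlier, -1 = novelty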

PDF Web [BibTex]

Solving Satisfiability Problems with Genetic Algorithms

Harmeling, S.

In Genetic Algorithms and Genetic Programming at Stanford 2000, pages: 206-213, (Editors: Koza, J. R.), Stanford Bookstore, Stanford, CA, USA, June 2000 (inbook)

Abstract
We show how to solve hard 3-SAT problems using genetic algorithms. Furthermore, we explore other genetic operators that may be useful to tackle 3-SAT problems, and discuss their pros and cons.
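
As a concrete illustration of the setup (our sketch, not the chapter's code): candidate assignments are bit strings, fitness counts satisfied clauses, and evolution proceeds by truncation selection, one-point crossover, and bit-flip mutation.

    # Tiny genetic algorithm for 3-SAT. A clause is a tuple of signed
    # literals, e.g. (1, -2, 3) means x1 OR (NOT x2) OR x3.
    import random

    def satisfied(clause, bits):
        return any(bits[abs(l) - 1] == (l > 0) for l in clause)

    def fitness(cnf, bits):
        return sum(satisfied(c, bits) for c in cnf)

    def ga(cnf, n_vars, pop=50, gens=200, pmut=0.02):
        random.seed(0)
        popu = [[random.random() < 0.5 for _ in range(n_vars)] for _ in range(pop)]
        for _ in range(gens):
            popu.sort(key=lambda b: -fitness(cnf, b))
            if fitness(cnf, popu[0]) == len(cnf):
                break                               # all clauses satisfied
            survivors = popu[: pop // 2]
            children = []
            for _ in range(pop - len(survivors)):
                a, b = random.sample(survivors, 2)
                cut = random.randrange(n_vars)      # one-point crossover
                child = a[:cut] + b[cut:]
                children.append([not g if random.random() < pmut else g
                                 for g in child])
            popu = survivors + children
        return popu[0]

    cnf = [(1, -2, 3), (-1, 2, -3), (2, 3, -1)]
    print(ga(cnf, n_vars=3))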

PDF [BibTex]

ν-Arc: Ensemble Learning in the Presence of Outliers

Rätsch, G., Schölkopf, B., Smola, A., Müller, K., Onoda, T., Mika, S.

In Advances in Neural Information Processing Systems 12, pages: 561-567, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
AdaBoost and other ensemble methods have successfully been applied to a number of classification tasks, seemingly defying problems of overfitting. AdaBoost performs gradient descent in an error function with respect to the margin, asymptotically concentrating on the patterns which are hardest to learn. For very noisy problems, however, this can be disadvantageous. Indeed, theoretical analysis has shown that the margin distribution, as opposed to just the minimal margin, plays a crucial role in understanding this phenomenon. Loosely speaking, some outliers should be tolerated if this has the benefit of substantially increasing the margin on the remaining points. We propose a new boosting algorithm which allows for the possibility of a pre-specified fraction of points to lie in the margin area or even on the wrong side of the decision boundary.
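
For context, here is a minimal sketch of the plain AdaBoost reweighting that the paper relaxes (our illustration; ν-Arc itself replaces the hard margin objective with one tolerating a pre-specified fraction ν of margin errors, which is not shown here):

    # Plain AdaBoost reweighting: misclassified (hard) patterns gain weight.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def adaboost(X, y, rounds=10):                  # y must be in {-1, +1}
        w = np.full(len(y), 1.0 / len(y))
        learners, alphas = [], []
        for _ in range(rounds):
            stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
            pred = stump.predict(X)
            err = w[pred != y].sum()
            alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
            w *= np.exp(-alpha * y * pred)          # upweight hard patterns
            w /= w.sum()
            learners.append(stump)
            alphas.append(alpha)
        return learners, alphas

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
    learners, alphas = adaboost(X, y)
    print(alphas[:3])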

PDF Web [BibTex]

Invariant feature extraction and classification in kernel spaces

Mika, S., Rätsch, G., Weston, J., Schölkopf, B., Smola, A., Müller, K.

In Advances in Neural Information Processing Systems 12, pages: 526-532, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

PDF Web [BibTex]

Transductive Inference for Estimating Values of Functions

Chapelle, O., Vapnik, V., Weston, J.

In Advances in Neural Information Processing Systems 12, pages: 421-427, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
We introduce an algorithm for estimating the values of a function at a set of test points $x_1^*,\dots,x_m^*$ given a set of training points $(x_1,y_1),\dots,(x_\ell,y_\ell)$ without estimating (as an intermediate step) the regression function. We demonstrate that this direct (transductive) way of estimating values of the regression (or, in pattern recognition, classification) function is more accurate than the traditional one based on two steps, first estimating the function and then calculating the values of this function at the points of interest.

PDF Web [BibTex]

The entropy regularization information criterion

Smola, A., Shawe-Taylor, J., Schölkopf, B., Williamson, R.

In Advances in Neural Information Processing Systems 12, pages: 342-348, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
Effective methods of capacity control via uniform convergence bounds for function expansions have been largely limited to Support Vector machines, where good bounds are obtainable by the entropy number approach. We extend these methods to systems with expansions in terms of arbitrary (parametrized) basis functions and a wide range of regularization methods covering the whole range of general linear additive models. This is achieved by a data dependent analysis of the eigenvalues of the corresponding design matrix.

PDF Web [BibTex]

Model Selection for Support Vector Machines

Chapelle, O., Vapnik, V.

In Advances in Neural Information Processing Systems 12, pages: 230-236, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
New functionals for parameter (model) selection of Support Vector Machines are introduced based on the concepts of the span of support vectors and rescaling of the feature space. It is shown that using these functionals, one can both predict the best choice of parameters of the model and the relative quality of performance for any value of parameter.

PDF Web [BibTex]

New Support Vector Algorithms

Schölkopf, B., Smola, A., Williamson, R., Bartlett, P.

Neural Computation, 12(5):1207-1245, May 2000 (article)

Abstract
We propose a new class of support vector algorithms for regression and classification. In these algorithms, a parameter ν lets one effectively control the number of support vectors. While this can be useful in its own right, the parameterization has the additional benefit of enabling us to eliminate one of the other free parameters of the algorithm: the accuracy parameter ε in the regression case, and the regularization constant C in the classification case. We describe the algorithms, give some theoretical results concerning the meaning and the choice of ν, and report experimental results.
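
These ν-parameterized algorithms are available in scikit-learn as NuSVC and NuSVR; a short usage sketch on toy data of our own, showing that ν lower-bounds the fraction of support vectors:

    # Larger nu admits more margin errors and yields more support vectors.
    import numpy as np
    from sklearn.svm import NuSVC

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 2))
    y = np.where(X[:, 0] > 0, 1, -1)

    for nu in (0.1, 0.5):
        clf = NuSVC(nu=nu, kernel="rbf", gamma=1.0).fit(X, y)
        print(nu, len(clf.support_))    # count grows with nu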

Web DOI [BibTex]

Generalization Abilities of Ensemble Learning Algorithms: OLA, Bagging, Boosting

Shin, H., Jang, M., Cho, S., Lee, B., Lim, Y.

In Proc. of the Korea Information Science Conference, pages: 226-228, Conference on Korean Information Science, April 2000 (inproceedings)

[BibTex]

A simple iterative approach to parameter optimization

Zien, A., Zimmer, R., Lengauer, T.

In RECOMB 2000, pages: 318-327, ACM Press, New York, NY, USA, Fourth Annual Conference on Research in Computational Molecular Biology, April 2000 (inproceedings)

Abstract
Various bioinformatics problems require optimizing several different properties simultaneously. For example, in the protein threading problem, a linear scoring function combines the values for different properties of possible sequence-to-structure alignments into a single score to allow for unambiguous optimization. In this context, an essential question is how each property should be weighted. As the native structures are known for some sequences, the implied partial ordering on optimal alignments may be used to adjust the weights. To resolve the arising interdependence of weights and computed solutions, we propose a novel approach: iterating the computation of solutions (here: threading alignments) given the weights and the estimation of optimal weights of the scoring function given these solutions via a systematic calibration method. We show that this procedure converges to structurally meaningful weights that also lead to significantly improved performance on comprehensive test data sets, as measured in different ways. The latter indicates that the performance of threading can be improved in general.
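
The proposed iteration can be phrased abstractly as alternating between solving for alignments under fixed weights and re-calibrating weights from those solutions. In the runnable sketch below, the two helpers are trivial stand-ins (hypothetical, ours alone), present only so the control flow executes; they are not the paper's threading or calibration methods.

    # Iterate-and-calibrate loop, abstracted (stand-in helpers, not the paper's).
    import numpy as np

    def compute_solutions(w):
        return np.tanh(w)                  # stand-in for threading given weights

    def calibrate_weights(solutions):
        return 0.5 * (solutions + 1.0)     # stand-in for weights given solutions

    def iterate(w0, max_iter=50, tol=1e-6):
        w = np.asarray(w0, dtype=float)
        for i in range(max_iter):
            sol = compute_solutions(w)     # step 1: solutions given weights
            w_new = calibrate_weights(sol) # step 2: weights given solutions
            if np.max(np.abs(w_new - w)) < tol:
                return w_new, i            # converged to a fixed point
            w = w_new
        return w, max_iter

    print(iterate([0.2, 0.8]))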

Web DOI [BibTex]

Contrast discrimination using periodic pulse trains

Wichmann, F., Henning, G.

pages: 74, 3. Tübinger Wahrnehmungskonferenz (TWK), February 2000 (poster)

Abstract
Understanding contrast transduction is essential for understanding spatial vision. Previous research (Wichmann et al. 1998; Wichmann, 1999; Henning and Wichmann, 1999) has demonstrated the importance of high contrasts for distinguishing between alternative models of contrast discrimination. However, the modulation transfer function of the eye imposes large contrast losses on stimuli, particularly stimuli of high spatial frequency, making high retinal contrasts difficult to obtain using sinusoidal gratings. Standard 2AFC contrast discrimination experiments were conducted using periodic pulse trains as stimuli. With our Mitsubishi display we achieve stimuli with up to 160% contrast at the fundamental frequency. The shape of the threshold versus (pedestal) contrast (TvC) curve using pulse trains shows the characteristic dipper shape, i.e. contrast discrimination is sometimes “easier” than detection. The rising part of the TvC function has the same slope as that measured for contrast discrimination using sinusoidal gratings of the same frequency as the fundamental. First, then, periodic pulse trains offer the possibility to explore the visual system’s properties using high retinal contrasts, and might thus prove useful in tasks other than contrast discrimination. Second, at least for high spatial frequencies (8 c/deg), it appears that contrast discrimination using sinusoids and periodic pulse trains results in virtually identical TvC functions, indicating a lack of probability summation. Further implications of these results are discussed.
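
The appeal of pulse trains is easy to verify numerically: an ideal periodic train of narrow pulses, expressed as contrast about its mean, has Fourier components of equal amplitude (200% contrast) at every harmonic, so the fundamental is not capped at 100% as for a sinusoid. A small numpy check of this textbook property (our illustration):

    # Harmonic contrasts of an ideal periodic pulse train are all equal.
    import numpy as np

    n, period = 1024, 64               # samples per cycle of the stimulus
    x = np.zeros(n)
    x[::period] = 1.0                  # delta-like pulse every `period` samples
    x = x / x.mean() - 1.0             # express as contrast about the mean

    c = np.abs(np.fft.rfft(x)) / (n / 2)       # amplitude per harmonic
    f0 = n // period
    print(c[f0], c[2 * f0], c[3 * f0])         # 2.0 2.0 2.0, i.e. 200% each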

Web [BibTex]

Subliminale Darbietung verkehrsrelevanter Information in Kraftfahrzeugen [Subliminal presentation of traffic-relevant information in motor vehicles]

Staedtgen, M., Hahn, S., Franz, MO., Spitzer, M.

pages: 98, (Editors: H.H. Bülthoff, K.R. Gegenfurtner, H.A. Mallot), 3. Tübinger Wahrnehmungskonferenz (TWK), February 2000 (poster)

Abstract
Modern image-processing technology makes it possible to detect certain critical traffic situations in motor vehicles automatically and to warn or inform the driver. One problem is how to present these results: the presentation should burden the driver as little as possible and should not draw attention away from the traffic through additional warning lights or acoustic signals. In a series of experiments we therefore investigated whether subliminally presented (i.e., not consciously perceived) traffic-relevant information influences behaviour and can be used to convey information to the driver. In a semantic priming experiment using a lexical decision task, words related to road traffic were processed faster when a related picture of a traffic sign had been presented subliminally beforehand. A speed-up was also obtained with parafoveal presentation of the subliminal stimuli. In a visual search task, traffic signs in pictures of real traffic scenes were detected faster when the picture of the sign had previously been shown subliminally. In both experiments the cues were presented for 17 ms, and forward and backward masking prevented conscious perception. These laboratory studies showed that subliminally presented stimuli can also speed up information processing in the context of road traffic. In a third experiment, the effect of a subliminal cue on braking reaction time was examined in a real driving test. Participants (n=17) were asked to brake as quickly as possible when the brake lights of a car driving 12-15 m ahead lit up. In 50 of 100 trials, a subliminal stimulus (two red dots, one centimetre in diameter and ten centimetres apart) was presented 150 ms before the brake lights came on, shown on a TFT-LCD display integrated into the car in place of the speedometer. Compared with reactions without the subliminal stimulus, reaction time was significantly shortened, by 51 ms. These experiments show that subliminal presentation of traffic-relevant information can influence behaviour in motor vehicles as well. In the future, combining online image processing in the vehicle with subliminal presentation of the results could improve traffic safety and comfort.

Web [BibTex]

Statistical Learning and Kernel Methods

Schölkopf, B.

In Data Fusion and Perception, CISM Courses and Lectures Vol. 431, International Centre for Mechanical Sciences, pages: 3-24, (Editors: G Della Riccia and H-J Lenz and R Kruse), Springer, Vienna, 2000 (inbook)

[BibTex]

Bounds on Error Expectation for Support Vector Machines

Vapnik, V., Chapelle, O.

Neural Computation, 12(9):2013-2036, 2000 (article)

Abstract
We introduce the concept of span of support vectors (SV) and show that the generalization ability of support vector machines (SVM) depends on this new geometrical concept. We prove that the value of the span is always smaller (and can be much smaller) than the diameter of the smallest sphere containing the support vectors, used in previous bounds. We also demonstrate experimentally that the prediction of the test error given by the span is very accurate and has direct application in model selection (choice of the optimal parameters of the SVM).

GZIP [BibTex]

Intelligence as a Complex System

Zhou, D.

Biologische Kybernetik, 2000 (phdthesis)

[BibTex]

Neural Networks in Robot Control

Peters, J.

Biologische Kybernetik, Fernuniversität Hagen, Hagen, Germany, 2000 (diplomathesis)

[BibTex]

Bayesian modelling of fMRI time series

Højen-Sørensen, PADFR., Rasmussen, CE., Hansen, LK.

In Advances in Neural Information Processing Systems 12, pages: 754-760, (Editors: Sara A. Solla, Todd K. Leen and Klaus-Robert Müller), MIT Press, 2000 (inproceedings)

Abstract
We present a Hidden Markov Model (HMM) for inferring the hidden psychological state (or neural activity) during single trial fMRI activation experiments with blocked task paradigms. Inference is based on Bayesian methodology, using a combination of analytical and a variety of Markov Chain Monte Carlo (MCMC) sampling techniques. The advantage of this method is that detection of short time learning effects between repeated trials is possible since inference is based only on single trial experiments.

PDF PostScript [BibTex]

An Introduction to Kernel-Based Learning Algorithms

Müller, K., Mika, S., Rätsch, G., Tsuda, K., Schölkopf, B.

In Handbook of Neural Network Signal Processing, 4, (Editors: Yu Hen Hu and Jenq-Neng Hwang), CRC Press, 2000 (inbook)

[BibTex]

Choosing ν in support vector regression with different noise models — theory and experiments

Chalimourda, A., Schölkopf, B., Smola, A.

In Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks, IJCNN 2000, Neural Computing: New Challenges and Perspectives for the New Millennium, IEEE, International Joint Conference on Neural Networks, 2000 (inproceedings)

[BibTex]

A High Resolution and Accurate Pentium Based Timer

Ong, CS., Wong, F., Lai, WK.

In 2000 (inproceedings)

PDF [BibTex]

Robust Ensemble Learning for Data Mining

Rätsch, G., Schölkopf, B., Smola, A., Mika, S., Onoda, T., Müller, K.

In Fourth Pacific-Asia Conference on Knowledge Discovery and Data Mining, Lecture Notes in Artificial Intelligence 1805, pages: 341-341, (Editors: H. Terano), Fourth Pacific-Asia Conference on Knowledge Discovery and Data Mining, 2000 (inproceedings)

[BibTex]

Sparse greedy matrix approximation for machine learning.

Smola, A., Schölkopf, B.

In 17th International Conference on Machine Learning, Stanford, 2000, pages: 911-918, (Editors: P Langley), Morgan Kaufmann, San Francisco, CA, USA, 17th International Conference on Machine Learning (ICML), 2000 (inproceedings)

[BibTex]

The Kernel Trick for Distances

Schölkopf, B.

(MSR-TR-2000-51), Microsoft Research, Redmond, WA, USA, 2000 (techreport)

Abstract
A method is described which, like the kernel trick in support vector machines (SVMs), lets us generalize distance-based algorithms to operate in feature spaces, usually nonlinearly related to the input space. This is done by identifying a class of kernels which can be represented as norm-based distances in Hilbert spaces. It turns out that common kernel algorithms, such as SVMs and kernel PCA, are actually distance-based algorithms and can be run with that class of kernels, too. As well as providing a useful new insight into how these algorithms work, the present work can form the basis for conceiving new algorithms.
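
For reference, the identity underlying this construction, in the inline-LaTeX style used elsewhere on this page: for a positive definite kernel $k$ with feature map $\Phi$, $\|\Phi(x)-\Phi(y)\|^2 = k(x,x) - 2k(x,y) + k(y,y)$, so any algorithm that touches its data only through pairwise distances can be kernelized by substituting the right-hand side.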

PDF Web [BibTex]

Entropy Numbers of Linear Function Classes.

Williamson, R., Smola, A., Schölkopf, B.

In 13th Annual Conference on Computational Learning Theory, pages: 309-319, (Editors: N Cesa-Bianchi and S Goldman), Morgan Kaufmann, San Francisco, CA, USA, 13th Annual Conference on Computational Learning Theory (COLT), 2000 (inproceedings)

[BibTex]

Kernel method for percentile feature extraction

Schölkopf, B., Platt, J., Smola, A.

(MSR-TR-2000-22), Microsoft Research, 2000 (techreport)

Abstract
A method is proposed which computes a direction in a dataset such that a specified fraction of a particular class of all examples is separated from the overall mean by a maximal margin. The projector onto that direction can be used for class-specific feature extraction. The algorithm is carried out in a feature space associated with a support vector kernel function, hence it can be used to construct a large class of nonlinear feature extractors. In the particular case where there exists only one class, the method can be thought of as a robust form of principal component analysis, where instead of variance we maximize percentile thresholds. Finally, we generalize it to also include the possibility of specifying negative examples.

PDF [BibTex]

1999

Some Aspects of Modelling Human Spatial Vision: Contrast Discrimination

Wichmann, F.

University of Oxford, October 1999 (phdthesis)

[BibTex]

Engineering Support Vector Machine Kernels That Recognize Translation Initiation Sites in DNA

Zien, A., Rätsch, G., Mika, S., Schölkopf, B., Lemmen, C., Smola, A., Lengauer, T., Müller, K.

In German Conference on Bioinformatics (GCB 1999), October 1999 (inproceedings)

Abstract
In order to extract protein sequences from nucleotide sequences, it is an important step to recognize points from which regions encoding proteins start, the so-called translation initiation sites (TIS). This can be modeled as a classification problem. We demonstrate the power of support vector machines (SVMs) for this task, and show how to successfully incorporate biological prior knowledge by engineering an appropriate kernel function.

Web [BibTex]

Unexpected and anticipated pain: identification of specific brain activations by correlation with reference functions derived from conditioning theory

Ploghaus, A., Clare, S., Wichmann, F., Tracey, I.

29, 29th Annual Meeting of the Society for Neuroscience (Neuroscience), October 1999 (poster)

[BibTex]

Lernen mit Kernen: Support-Vektor-Methoden zur Analyse hochdimensionaler Daten [Learning with kernels: support vector methods for the analysis of high-dimensional data]

Schölkopf, B., Müller, K., Smola, A.

Informatik - Forschung und Entwicklung, 14(3):154-163, September 1999 (article)

Abstract
We describe recent developments and results of statistical learning theory. In the framework of learning from examples, two factors control generalization ability: how well the training data are explained, and the complexity of the learning machine used to explain them. We describe kernel algorithms in feature spaces as elegant and efficient methods of realizing such machines. Examples thereof are Support Vector Machines (SVM) and Kernel PCA (Principal Component Analysis). More important than any individual example of a kernel algorithm, however, is the insight that any algorithm that can be cast in terms of dot products can be generalized to a nonlinear setting using kernels. Finally, we illustrate the significance of kernel algorithms by briefly describing industrial and academic applications, including ones where we obtained benchmark record results.

PDF PDF DOI [BibTex]

Input space versus feature space in kernel-based methods

Schölkopf, B., Mika, S., Burges, C., Knirsch, P., Müller, K., Rätsch, G., Smola, A.

IEEE Transactions On Neural Networks, 10(5):1000-1017, September 1999 (article)

Abstract
This paper collects some ideas targeted at advancing our understanding of the feature spaces associated with support vector (SV) kernel functions. We first discuss the geometry of feature space. In particular, we review what is known about the shape of the image of input space under the feature space map, and how this influences the capacity of SV methods. Following this, we describe how the metric governing the intrinsic geometry of the mapped surface can be computed in terms of the kernel, using the example of the class of inhomogeneous polynomial kernels, which are often used in SV pattern recognition. We then discuss the connection between feature space and input space by dealing with the question of how one can, given some vector in feature space, find a preimage (exact or approximate) in input space. We describe algorithms to tackle this issue, and show their utility in two applications of kernel methods. First, we use it to reduce the computational complexity of SV decision functions; second, we combine it with the kernel PCA algorithm, thereby constructing a nonlinear statistical denoising technique which is shown to perform well on real-world data.

Web DOI [BibTex]

p73 and p63 are homotetramers capable of weak heterotypic interactions with each other but not with p53.

Davison, T., Vagner, C., Kaghad, M., Ayed, A., Caput, D., Arrowsmith, CH.

Journal of Biological Chemistry, 274(26):18709-18714, June 1999 (article)

Abstract
Mutations in the p53 tumor suppressor gene are the most frequent genetic alterations found in human cancers. Recent identification of two human homologues of p53 has raised the prospect of functional interactions between family members via a conserved oligomerization domain. Here we report in vitro and in vivo analysis of homo- and hetero-oligomerization of p53 and its homologues, p63 and p73. The oligomerization domains of p63 and p73 can independently fold into stable homotetramers, as previously observed for p53. However, the oligomerization domain of p53 does not associate with that of either p73 or p63, even when p53 is in 15-fold excess. On the other hand, the oligomerization domains of p63 and p73 are able to weakly associate with one another in vitro. In vivo co-transfection assays of the ability of p53 and its homologues to activate reporter genes showed that a DNA-binding mutant of p53 was not able to act in a dominant negative manner over wild-type p73 or p63 but that a p73 mutant could inhibit the activity of wild-type p63. These data suggest that mutant p53 in cancer cells will not interact with endogenous or exogenous p63 or p73 via their respective oligomerization domains. It also establishes that the multiple isoforms of p63 as well as those of p73 are capable of interacting via their common oligomerization domain.

Web [BibTex]

Shrinking the tube: a new support vector regression algorithm

Schölkopf, B., Bartlett, P., Smola, A., Williamson, R.

In Advances in Neural Information Processing Systems 11, pages: 330-336, (Editors: MS Kearns and SA Solla and DA Cohn), MIT Press, Cambridge, MA, USA, 12th Annual Conference on Neural Information Processing Systems (NIPS), June 1999 (inproceedings)

PDF Web [BibTex]

Semiparametric support vector and linear programming machines

Smola, A., Friess, T., Schölkopf, B.

In Advances in Neural Information Processing Systems 11, pages: 585-591, (Editors: MS Kearns and SA Solla and DA Cohn), MIT Press, Cambridge, MA, USA, 12th Annual Conference on Neural Information Processing Systems (NIPS), June 1999 (inproceedings)

Abstract
Semiparametric models are useful tools in cases where domain knowledge about the function to be estimated exists, or where emphasis is put on the understandability of the model. We extend two learning algorithms, Support Vector machines and Linear Programming machines, to this case and give experimental results for SV machines.

PDF Web [BibTex]

Kernel PCA and De-noising in feature spaces

Mika, S., Schölkopf, B., Smola, A., Müller, K., Scholz, M., Rätsch, G.

In Advances in Neural Information Processing Systems 11, pages: 536-542, (Editors: MS Kearns and SA Solla and DA Cohn), MIT Press, Cambridge, MA, USA, 12th Annual Conference on Neural Information Processing Systems (NIPS), June 1999 (inproceedings)

Abstract
Kernel PCA as a nonlinear feature extractor has proven powerful as a preprocessing step for classification algorithms. But it can also be considered as a natural generalization of linear principal component analysis. This gives rise to the question how to use nonlinear features for data compression, reconstruction, and de-noising, applications common in linear PCA. This is a nontrivial task, as the results provided by kernel PCA live in some high dimensional feature space and need not have pre-images in input space. This work presents ideas for finding approximate pre-images, focusing on Gaussian kernels, and shows experimental results using these pre-images in data reconstruction and de-noising on toy examples as well as on real world data.
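
Scikit-learn's KernelPCA exposes this machinery through fit_inverse_transform, which learns an approximate pre-image map (a regression-based approximation, not the paper's exact procedure); a minimal de-noising sketch on toy data of our own:

    # De-noise by projecting onto leading kernel principal components,
    # then mapping back with an approximate pre-image (toy example).
    import numpy as np
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(0)
    t = rng.uniform(0, 2 * np.pi, 300)
    clean = np.c_[np.cos(t), np.sin(t)]          # points on a circle
    noisy = clean + 0.1 * rng.standard_normal((300, 2))

    kpca = KernelPCA(n_components=4, kernel="rbf", gamma=2.0,
                     fit_inverse_transform=True, alpha=0.1)
    denoised = kpca.inverse_transform(kpca.fit_transform(noisy))

    print(np.mean(np.linalg.norm(noisy - clean, axis=1)),      # noise level
          np.mean(np.linalg.norm(denoised - clean, axis=1)))   # typically smaller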

PDF Web [BibTex]

Kernel principal component analysis.

Schölkopf, B., Smola, A., Müller, K.

In Advances in Kernel Methods—Support Vector Learning, pages: 327-352, (Editors: B Schölkopf and CJC Burges and AJ Smola), MIT Press, Cambridge, MA, 1999 (inbook)

[BibTex]

Estimating the support of a high-dimensional distribution

Schölkopf, B., Platt, J., Shawe-Taylor, J., Smola, A., Williamson, R.

(MSR-TR-99-87), Microsoft Research, 1999 (techreport)

Web [BibTex]

Single-class Support Vector Machines

Schölkopf, B., Williamson, R., Smola, A., Shawe-Taylor, J.

Dagstuhl-Seminar on Unsupervised Learning, pages: 19-20, (Editors: J. Buhmann, W. Maass, H. Ritter and N. Tishby), 1999 (poster)

[BibTex]

Spatial Learning and Localization in Animals: A Computational Model and Its Implications for Mobile Robots

Balakrishnan, K., Bousquet, O., Honavar, V.

Adaptive Behavior, 7(2):173-216, 1999 (article)

[BibTex]

SVMs for Histogram Based Image Classification

Chapelle, O., Haffner, P., Vapnik, V.

IEEE Transactions on Neural Networks, (9), 1999 (article)

Abstract
Traditional classification approaches generalize poorly on image classification tasks, because of the high dimensionality of the feature space. This paper shows that Support Vector Machines (SVM) can generalize well on difficult image classification problems where the only features are high dimensional histograms. Heavy-tailed RBF kernels of the form $K(\mathbf{x},\mathbf{y})=e^{-\rho\sum_i |x_i^a-y_i^a|^{b}}$ with $a \leq 1$ and $b \leq 2$ are evaluated on the classification of images extracted from the Corel Stock Photo Collection and shown to far outperform traditional polynomial or Gaussian RBF kernels. Moreover, we observed that a simple remapping of the input $x_i \rightarrow x_i^a$ improves the performance of linear SVMs to such an extent that it makes them, for this problem, a valid alternative to RBF kernels.
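
Such a heavy-tailed kernel is straightforward to plug into a standard SVM implementation as a custom Gram-matrix callable; a hedged sketch with random stand-in "histograms" (our data and parameter values, not those of the paper):

    # Heavy-tailed RBF kernel K(x,y) = exp(-rho * sum_i |x_i^a - y_i^a|^b).
    import numpy as np
    from sklearn.svm import SVC

    a, b, rho = 0.5, 1.0, 1.0

    def heavy_tailed_kernel(X, Y):
        d = np.abs((X ** a)[:, None, :] - (Y ** a)[None, :, :]) ** b
        return np.exp(-rho * d.sum(axis=2))

    rng = np.random.default_rng(0)
    H = rng.dirichlet(np.ones(16), size=200)     # normalized "histograms"
    y = (H[:, 0] > H[:, 1]).astype(int)          # toy labels

    clf = SVC(kernel=heavy_tailed_kernel).fit(H, y)
    print(clf.score(H, y))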

GZIP [BibTex]

Classifying LEP data with support vector algorithms.

Vannerem, P., Müller, K., Smola, A., Schölkopf, B., Söldner-Rembold, S.

In Artificial Intelligence in High Energy Nuclear Physics 99, Artificial Intelligence in High Energy Nuclear Physics 99, 1999 (inproceedings)

[BibTex]

Generalization Bounds via Eigenvalues of the Gram matrix

Schölkopf, B., Shawe-Taylor, J., Smola, A., Williamson, R.

(99-035), NeuroCOLT, 1999 (techreport)

[BibTex]

Pedestal effects with periodic pulse trains

Henning, G., Wichmann, F.

Perception, 28, pages: S137, 1999 (poster)

Abstract
It is important to know for theoretical reasons how performance varies with stimulus contrast. But, for objects on CRT displays, retinal contrast is limited by the linear range of the display and the modulation transfer function of the eye. For example, with an 8 c/deg sinusoidal grating at 90% contrast, the contrast of the retinal image is barely 45%; more retinal contrast is required, however, to discriminate among theories of contrast discrimination (Wichmann, Henning and Ploghaus, 1998). The stimulus with the greatest contrast at any spatial-frequency component is a periodic pulse train, which has 200% contrast at every harmonic. Such a waveform cannot, of course, be produced; the best we can do with our Mitsubishi display provides a contrast of 150% at an 8-c/deg fundamental, thus producing a retinal image with about 75% contrast. The penalty of using this stimulus is that the 2nd harmonic of the retinal image also has high contrast (with an emmetropic eye, more than 60% of the contrast of the 8-c/deg fundamental) and the mean luminance is not large (24.5 cd/m2 on our display). We have used standard 2-AFC experiments to measure the detectability of an 8-c/deg pulse train against the background of an identical pulse train of different contrasts. An unusually large improvement in detectability was measured (the pedestal effect or "dipper"), and the dipper was unusually broad. The implications of these results will be discussed.

[BibTex]

Apprentissage Automatique et Simplicité [Machine Learning and Simplicity]

Bousquet, O.

Biologische Kybernetik, 1999, in French (diplomathesis)

PostScript [BibTex]

Classification on proximity data with LP-machines

Graepel, T., Herbrich, R., Schölkopf, B., Smola, A., Bartlett, P., Müller, K., Obermayer, K., Williamson, R.

In Artificial Neural Networks, 1999. ICANN 99, 470, pages: 304-309, Conference Publications, IEEE, 9th International Conference on Artificial Neural Networks, 1999 (inproceedings)

DOI [BibTex]

Kernel-dependent support vector error bounds

Schölkopf, B., Shawe-Taylor, J., Smola, A., Williamson, R.

In Artificial Neural Networks, 1999. ICANN 99, 470, pages: 103-108, Conference Publications, IEEE, 9th International Conference on Artificial Neural Networks, 1999 (inproceedings)

DOI [BibTex]
