

2000


A real-time model of the human knee for application in virtual orthopaedic trainer

Peters, J., Riener, R.

In Proceedings of the 10th International Conference on BioMedical Engineering (ICBME 2000), 10, pages: 1-2, 10th International Conference on BioMedical Engineering (ICBME), December 2000 (inproceedings)

Abstract
In this paper a real-time capable computational model of the human knee is presented. The model describes the passive elastic joint characteristics in six degrees of freedom (DOF). A black-box approach was chosen, in which experimental data were approximated by piecewise polynomial functions. The knee model has been applied in the Virtual Orthopaedic Trainer, which can support training of the physical knee evaluation required for diagnosis and surgical planning.
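The black-box idea above can be sketched in a few lines: fit one polynomial per angle segment and evaluate the piecewise model cheaply at run time. The data, segment boundaries, and curve below are invented for illustration, not the paper's measurements.

```python
import numpy as np

def fit_piecewise(angles, torques, breakpoints, degree=3):
    """Fit one polynomial per segment; return a list of (segment, coeffs)."""
    segments = []
    edges = [angles.min()] + list(breakpoints) + [angles.max()]
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (angles >= lo) & (angles <= hi)
        coeffs = np.polyfit(angles[mask], torques[mask], degree)
        segments.append(((lo, hi), coeffs))
    return segments

def evaluate(segments, angle):
    """Evaluate the piecewise model at one angle (cheap, real-time capable)."""
    for (lo, hi), coeffs in segments:
        if lo <= angle <= hi:
            return np.polyval(coeffs, angle)
    raise ValueError("angle outside fitted range")

# Synthetic 1-DOF torque-angle characteristic (an invented cubic curve).
angles = np.linspace(-10.0, 120.0, 200)
torques = 0.001 * angles**3 - 0.05 * angles
model = fit_piecewise(angles, torques, breakpoints=[30.0, 80.0])
print(round(float(evaluate(model, 50.0)), 3))   # ≈ 122.5 for this synthetic curve
```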

PDF Web [BibTex]

On Designing an Automated Malaysian Stemmer for the Malay Language

Tai, SY., Ong, CS., Abullah, NA.

In Fifth International Workshop on Information Retrieval with Asian Languages, pages: 207-208, ACM Press, New York, NY, USA, Fifth International Workshop on Information Retrieval with Asian Languages, October 2000 (inproceedings)

Abstract
Online and interactive information retrieval systems are likely to play an increasing role in the Malay Language community. To facilitate and automate the process of matching morphological term variants, a stemmer focusing on common affix removal algorithms is proposed as part of the design of an information retrieval system for the Malay Language. Stemming is a morphological process of normalizing word tokens down to their essential roots. The proposed stemmer strips prefixes and suffixes off the word. The experiment conducted with web sites selected from the World Wide Web has exhibited substantial improvements in the number of words indexed.
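A minimal sketch of the affix-removal idea described above: strip known prefixes and suffixes from a word token, keeping a minimum stem length. The affix lists here are a small illustrative subset, not the paper's actual rule set.

```python
# Illustrative subset of Malay affixes (not the paper's full rules).
PREFIXES = ["memper", "men", "mem", "me", "ber", "di", "ter", "pe"]
SUFFIXES = ["kan", "nya", "lah", "an", "i"]

def stem(word):
    """Strip at most one prefix and one suffix, longest match first."""
    for p in sorted(PREFIXES, key=len, reverse=True):
        if word.startswith(p) and len(word) - len(p) >= 3:
            word = word[len(p):]
            break
    for s in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(s) and len(word) - len(s) >= 3:
            word = word[:-len(s)]
            break
    return word

print(stem("makanan"))   # "makan" after removing the suffix "an"
```

Over-stemming (e.g. removing a pseudo-affix that is part of the root) is the classic failure mode such minimum-length checks try to limit.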

PostScript Web DOI [BibTex]

Robust ensemble learning

Rätsch, G., Schölkopf, B., Smola, A., Mika, S., Onoda, T., Müller, K.

In Advances in Large Margin Classifiers, pages: 207-220, Neural Information Processing Series, (Editors: AJ Smola and PL Bartlett and B Schölkopf and D Schuurmans), MIT Press, Cambridge, MA, USA, October 2000 (inbook)

[BibTex]

Entropy numbers for convex combinations and MLPs

Smola, A., Elisseeff, A., Schölkopf, B., Williamson, R.

In Advances in Large Margin Classifiers, pages: 369-387, Neural Information Processing Series, (Editors: AJ Smola and PL Bartlett and B Schölkopf and D Schuurmans), MIT Press, Cambridge, MA, USA, October 2000 (inbook)

[BibTex]

Ensemble of Specialized Networks based on Input Space Partition

Shin, H., Lee, H., Cho, S.

In Proc. of the Korean Operations Research and Management Science Conference, pages: 33-36, Korean Operations Research and Management Science Conference, October 2000 (inproceedings)

[BibTex]

DES Approach Failure Recovery of Pump-valve System

Son, HI., Kim, KW., Lee, S.

In Korean Society of Precision Engineering (KSPE) Conference, pages: 647-650, Annual Meeting of the Korean Society of Precision Engineering (KSPE), October 2000 (inproceedings)

PDF [BibTex]

Natural Regularization from Generative Models

Oliver, N., Schölkopf, B., Smola, A.

In Advances in Large Margin Classifiers, pages: 51-60, Neural Information Processing Series, (Editors: AJ Smola and PL Bartlett and B Schölkopf and D Schuurmans), MIT Press, Cambridge, MA, USA, October 2000 (inbook)

[BibTex]

Ensemble Learning Algorithm of Specialized Networks

Shin, H., Lee, H., Cho, S.

In Proc. of the Korea Information Science Conference, pages: 308-310, Korea Information Science Conference, October 2000 (inproceedings)

[BibTex]

DES Approach Failure Diagnosis of Pump-valve System

Son, HI., Kim, KW., Lee, S.

In Korean Society of Precision Engineering (KSPE) Conference, pages: 643-646, Annual Meeting of the Korean Society of Precision Engineering (KSPE), October 2000 (inproceedings)

Abstract
As many industrial systems become more complex, it becomes extremely difficult to diagnose the cause of failures. This paper presents a failure diagnosis approach based on discrete event system theory. In particular, the approach is a hybrid of event-based and state-based ones leading to a simpler failure diagnoser with supervisory control capability. The design procedure is presented along with a pump-valve system as an example.
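The core of a discrete event system diagnoser can be sketched as tracking every plant state consistent with the observed event sequence, where the fault itself is an unobservable transition. The plant, states, and events below are an invented toy, not the paper's pump-valve model.

```python
# Observable transitions of a hypothetical pump-valve plant.
TRANSITIONS = {
    ("idle", "start"): "pumping",
    ("pumping", "flow_ok"): "pumping",
    ("stuck", "no_flow"): "stuck",
    ("pumping", "stop"): "idle",
    ("stuck", "stop"): "idle",
}
# Unobservable fault: while pumping, the valve can silently stick.
FAULT = {"pumping": "stuck"}

def diagnose(events, initial="idle"):
    """Return the set of plant states consistent with the observed events."""
    possible = {initial}
    for e in events:
        nxt = set()
        for s in possible:
            # The fault may or may not have fired before this observation.
            for s2 in {s, FAULT.get(s, s)}:
                if (s2, e) in TRANSITIONS:
                    nxt.add(TRANSITIONS[(s2, e)])
        possible = nxt
    return possible

print(diagnose(["start", "no_flow"]))   # the fault is isolated: {'stuck'}
```

When the estimate collapses to a single fault state, the failure is diagnosed; a supervisor can then switch to a recovery strategy.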

PDF [BibTex]

Analysis of Gene Expression Data with Pathway Scores

Zien, A., Küffner, R., Zimmer, R., Lengauer, T.

In ISMB 2000, pages: 407-417, AAAI Press, Menlo Park, CA, USA, 8th International Conference on Intelligent Systems for Molecular Biology, August 2000 (inproceedings)

Abstract
We present a new approach for the evaluation of gene expression data. The basic idea is to generate biologically possible pathways and to score them with respect to gene expression measurements. We suggest sample scoring functions for different problem specifications. The significance of the scores for the investigated pathways is assessed by comparison to a number of scores for random pathways. We show that simple scoring functions can assign statistically significant scores to biologically relevant pathways. This suggests that the combination of appropriate scoring functions with the systematic generation of pathways can be used in order to select the most interesting pathways based on gene expression measurements.
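The scoring-and-significance idea can be sketched directly: score a candidate pathway against expression measurements, then compare that score to scores of random gene sets. The expression values and the scoring function below are invented stand-ins, not the paper's data or scores.

```python
import random

# Invented expression measurements for six genes.
expression = {"gA": 2.1, "gB": 1.9, "gC": 2.0, "gD": -1.5, "gE": 0.1, "gF": -0.2}

def score(pathway):
    """Toy scoring function: mean expression of the pathway's genes."""
    return sum(expression[g] for g in pathway) / len(pathway)

def p_value(pathway, n_random=1000, seed=0):
    """Fraction of random same-size gene sets scoring at least as high."""
    rng = random.Random(seed)
    genes = list(expression)
    s = score(pathway)
    hits = sum(score(rng.sample(genes, len(pathway))) >= s
               for _ in range(n_random))
    return hits / n_random

candidate = ["gA", "gB", "gC"]   # a consistently up-regulated "pathway"
print(score(candidate), p_value(candidate))
```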

PDF [BibTex]

Observational Learning with Modular Networks

Shin, H., Lee, H., Cho, S.

In Lecture Notes in Computer Science (LNCS 1983), LNCS 1983, pages: 126-132, Springer-Verlag, Heidelberg, International Conference on Intelligent Data Engineering and Automated Learning (IDEAL), July 2000 (inproceedings)

Abstract
The observational learning algorithm (OLA) is an ensemble algorithm in which each network is initially trained with a bootstrapped data set, and virtual data generated from the ensemble are used for further training. Here we propose a modular OLA approach where the original training set is partitioned into clusters and each network is instead trained with one of the clusters. Networks are then combined with weighting factors that are inversely proportional to the distance from the input vector to the cluster centers. Comparison with bagging and boosting shows that the proposed approach reduces generalization error while employing a smaller number of networks.
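The combination rule in the abstract (expert weights inversely proportional to the distance to each cluster center) can be sketched as follows; the cluster centers and the trivial per-cluster "experts" are invented placeholders for trained networks.

```python
import numpy as np

# Two invented cluster centers with one trivial expert each.
centers = np.array([[0.0, 0.0], [10.0, 10.0]])
experts = [lambda x: 0.0, lambda x: 1.0]   # stand-ins for trained networks

def predict(x, eps=1e-9):
    """Combine experts with inverse-distance weights to the cluster centers."""
    d = np.linalg.norm(centers - x, axis=1)
    w = 1.0 / (d + eps)
    w /= w.sum()                            # normalize weights to sum to 1
    return sum(wi * f(x) for wi, f in zip(w, experts))

print(predict(np.array([1.0, 1.0])))        # close to cluster 0, so near 0
```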

PDF [BibTex]

The Infinite Gaussian Mixture Model

Rasmussen, CE.

In Advances in Neural Information Processing Systems 12, pages: 554-560, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
In a Bayesian mixture model it is not necessary a priori to limit the number of components to be finite. In this paper an infinite Gaussian mixture model is presented which neatly sidesteps the difficult problem of finding the ``right'' number of mixture components. Inference in the model is done using an efficient parameter-free Markov Chain that relies entirely on Gibbs sampling.
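The "unlimited components" idea rests on a Dirichlet-process prior; a minimal way to see it is the Chinese restaurant process, under which the number of occupied components grows with the data rather than being fixed in advance. This is a sketch of that prior only (with an invented concentration parameter), not the paper's full Gibbs sampler.

```python
import random

def crp_assignments(n, alpha=1.0, seed=3):
    """Sample component occupancy counts from a Chinese restaurant process."""
    rng = random.Random(seed)
    counts = []                      # points per occupied component
    for i in range(n):
        # New component chosen with probability alpha / (i + alpha).
        probs = [c / (i + alpha) for c in counts] + [alpha / (i + alpha)]
        r, acc = rng.random(), 0.0
        for k, p in enumerate(probs):
            acc += p
            if r < acc:
                break
        if k == len(counts):
            counts.append(1)         # open a new component
        else:
            counts[k] += 1
    return counts

tables = crp_assignments(100)
print(len(tables), sum(tables))      # occupied components grow roughly as alpha*log(n)
```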

PDF Web [BibTex]

Generalization Abilities of Ensemble Learning Algorithms

Shin, H., Jang, M., Cho, S.

In Proc. of the Korean Brain Society Conference, pages: 129-133, Korean Brain Society Conference, June 2000 (inproceedings)

[BibTex]

Support vector method for novelty detection

Schölkopf, B., Williamson, R., Smola, A., Shawe-Taylor, J., Platt, J.

In Advances in Neural Information Processing Systems 12, pages: 582-588, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
Suppose you are given some dataset drawn from an underlying probability distribution P and you want to estimate a “simple” subset S of input space such that the probability that a test point drawn from P lies outside of S equals some a priori specified value nu between 0 and 1. We propose a method to approach this problem by trying to estimate a function f which is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. We provide a theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabelled data.
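This algorithm is available as scikit-learn's `OneClassSVM`, where `nu` plays the role of the pre-specified fraction of points allowed to fall outside the estimated region. The data below are synthetic, chosen only to make the novelties obvious.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(200, 2))       # "normal" data
outliers = np.array([[6.0, 6.0], [-7.0, 5.0]])    # obvious novelties

# nu = 0.05: at most ~5% of training points may lie outside the region.
clf = OneClassSVM(kernel="rbf", gamma=0.1, nu=0.05).fit(train)
print(clf.predict(outliers))                      # -1 marks novelties
```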

PDF Web [BibTex]

Solving Satisfiability Problems with Genetic Algorithms

Harmeling, S.

In Genetic Algorithms and Genetic Programming at Stanford 2000, pages: 206-213, (Editors: Koza, J. R.), Stanford Bookstore, Stanford, CA, USA, June 2000 (inbook)

Abstract
We show how to solve hard 3-SAT problems using genetic algorithms. Furthermore, we explore other genetic operators that may be useful to tackle 3-SAT problems, and discuss their pros and cons.
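A toy version of the approach: individuals are bit-string assignments, fitness is the number of satisfied clauses, and new individuals arise by one-point crossover and mutation. The 3-SAT instance, operators, and parameters below are invented for illustration and much simpler than the hard instances the paper targets.

```python
import random

# An invented satisfiable 3-SAT instance over 4 variables
# (positive literal i means x_i, negative means its negation).
CLAUSES = [(1, -2, 3), (-1, 2, 4), (2, -3, -4), (1, 3, -4)]
N_VARS = 4

def fitness(bits):
    """Number of clauses satisfied by the assignment."""
    return sum(
        any((bits[abs(l) - 1] == 1) == (l > 0) for l in clause)
        for clause in CLAUSES
    )

def solve(pop_size=20, gens=50, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(N_VARS)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(CLAUSES):
            return pop[0]                          # all clauses satisfied
        half = pop[: pop_size // 2]                # keep the fitter half
        children = []
        for _ in range(pop_size - len(half)):
            a, b = rng.sample(half, 2)
            cut = rng.randrange(1, N_VARS)
            child = a[:cut] + b[cut:]              # one-point crossover
            if rng.random() < 0.2:                 # bit-flip mutation
                child[rng.randrange(N_VARS)] ^= 1
            children.append(child)
        pop = half + children
    return max(pop, key=fitness)

assignment = solve()
print(assignment, fitness(assignment))
```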

PDF [BibTex]

v-Arc: Ensemble Learning in the Presence of Outliers

Rätsch, G., Schölkopf, B., Smola, A., Müller, K., Onoda, T., Mika, S.

In Advances in Neural Information Processing Systems 12, pages: 561-567, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
AdaBoost and other ensemble methods have successfully been applied to a number of classification tasks, seemingly defying problems of overfitting. AdaBoost performs gradient descent in an error function with respect to the margin, asymptotically concentrating on the patterns which are hardest to learn. For very noisy problems, however, this can be disadvantageous. Indeed, theoretical analysis has shown that the margin distribution, as opposed to just the minimal margin, plays a crucial role in understanding this phenomenon. Loosely speaking, some outliers should be tolerated if this has the benefit of substantially increasing the margin on the remaining points. We propose a new boosting algorithm which allows for the possibility of a pre-specified fraction of points to lie in the margin area or even on the wrong side of the decision boundary.

PDF Web [BibTex]

Invariant feature extraction and classification in kernel spaces

Mika, S., Rätsch, G., Weston, J., Schölkopf, B., Smola, A., Müller, K.

In Advances in neural information processing systems 12, pages: 526-532, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

PDF Web [BibTex]

Transductive Inference for Estimating Values of Functions

Chapelle, O., Vapnik, V., Weston, J.

In Advances in Neural Information Processing Systems 12, pages: 421-427, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
We introduce an algorithm for estimating the values of a function at a set of test points $x_1^*,\dots,x_m^*$ given a set of training points $(x_1,y_1),\dots,(x_\ell,y_\ell)$ without estimating (as an intermediate step) the regression function. We demonstrate that this direct (transductive) way of estimating values of the regression (or classification in pattern recognition) is more accurate than the traditional one based on two steps: first estimating the function and then calculating the values of this function at the points of interest.

PDF Web [BibTex]

The entropy regularization information criterion

Smola, A., Shawe-Taylor, J., Schölkopf, B., Williamson, R.

In Advances in Neural Information Processing Systems 12, pages: 342-348, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
Effective methods of capacity control via uniform convergence bounds for function expansions have been largely limited to Support Vector machines, where good bounds are obtainable by the entropy number approach. We extend these methods to systems with expansions in terms of arbitrary (parametrized) basis functions and a wide range of regularization methods covering the whole range of general linear additive models. This is achieved by a data dependent analysis of the eigenvalues of the corresponding design matrix.

PDF Web [BibTex]

Model Selection for Support Vector Machines

Chapelle, O., Vapnik, V.

In Advances in Neural Information Processing Systems 12, pages: 230-236, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)

Abstract
New functionals for parameter (model) selection of Support Vector Machines are introduced based on the concepts of the span of support vectors and rescaling of the feature space. It is shown that using these functionals, one can both predict the best choice of parameters of the model and the relative quality of performance for any value of parameter.

PDF Web [BibTex]

Generalization Abilities of Ensemble Learning Algorithms: OLA, Bagging, Boosting

Shin, H., Jang, M., Cho, S., Lee, B., Lim, Y.

In Proc. of the Korea Information Science Conference, pages: 226-228, Conference on Korean Information Science, April 2000 (inproceedings)

[BibTex]

A simple iterative approach to parameter optimization

Zien, A., Zimmer, R., Lengauer, T.

In RECOMB2000, pages: 318-327, ACM Press, New York, NY, USA, Fourth Annual Conference on Research in Computational Molecular Biology, April 2000 (inproceedings)

Abstract
Various bioinformatics problems require optimizing several different properties simultaneously. For example, in the protein threading problem, a linear scoring function combines the values for different properties of possible sequence-to-structure alignments into a single score to allow for unambiguous optimization. In this context, an essential question is how each property should be weighted. As the native structures are known for some sequences, the implied partial ordering on optimal alignments may be used to adjust the weights. To resolve the arising interdependence of weights and computed solutions, we propose a novel approach: iterating the computation of solutions (here: threading alignments) given the weights and the estimation of optimal weights of the scoring function given these solutions via a systematic calibration method. We show that this procedure converges to structurally meaningful weights, that also lead to significantly improved performance on comprehensive test data sets as measured in different ways. The latter indicates that the performance of threading can be improved in general.
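The alternation described above can be sketched in miniature: given weights, find the best-scoring candidate; given the known-correct candidate, nudge the weights so it scores best. The candidates, features, and update rule below are invented toy stand-ins, not the paper's calibration method.

```python
# Each candidate "alignment" is a feature vector; candidate 0 is the native one.
CANDIDATES = [(3.0, 1.0), (1.0, 3.0), (2.0, 2.0)]
NATIVE = 0

def best(weights):
    """Index of the candidate with the highest linear score."""
    scores = [sum(w * f for w, f in zip(weights, c)) for c in CANDIDATES]
    return scores.index(max(scores))

def calibrate(weights, lr=0.1, steps=100):
    """Iterate: score candidates, then move weights toward the native features."""
    for _ in range(steps):
        b = best(weights)
        if b == NATIVE:
            break
        weights = tuple(
            w + lr * (fn - fb)
            for w, fn, fb in zip(weights, CANDIDATES[NATIVE], CANDIDATES[b])
        )
    return weights

w = calibrate((0.0, 1.0))   # this initial weighting prefers candidate 1
print(best(w))              # after calibration the native candidate wins: 0
```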

Web DOI [BibTex]

Statistical Learning and Kernel Methods

Schölkopf, B.

In CISM Courses and Lectures, International Centre for Mechanical Sciences, 431(23):3-24, (Editors: G Della Riccia and H-J Lenz and R Kruse), Springer, Vienna, Data Fusion and Perception, 2000 (inbook)

[BibTex]

Bayesian modelling of fMRI time series

Højen-Sørensen, PADFR., Rasmussen, CE., Hansen, LK.

In Advances in Neural Information Processing Systems 12, pages: 754-760, (Editors: Sara A. Solla, Todd K. Leen and Klaus-Robert Müller), MIT Press, Cambridge, MA, USA, 2000 (inproceedings)

Abstract
We present a Hidden Markov Model (HMM) for inferring the hidden psychological state (or neural activity) during single trial fMRI activation experiments with blocked task paradigms. Inference is based on Bayesian methodology, using a combination of analytical and a variety of Markov Chain Monte Carlo (MCMC) sampling techniques. The advantage of this method is that detection of short time learning effects between repeated trials is possible since inference is based only on single trial experiments.

PDF PostScript [BibTex]

An Introduction to Kernel-Based Learning Algorithms

Müller, K., Mika, S., Rätsch, G., Tsuda, K., Schölkopf, B.

In Handbook of Neural Network Signal Processing, 4, (Editors: Yu Hen Hu and Jenq-Neng Hwang), CRC Press, 2000 (inbook)

[BibTex]

Choosing nu in support vector regression with different noise models — theory and experiments

Chalimourda, A., Schölkopf, B., Smola, A.

In Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks, IJCNN 2000, Neural Computing: New Challenges and Perspectives for the New Millennium, IEEE, International Joint Conference on Neural Networks, 2000 (inproceedings)

[BibTex]

A High Resolution and Accurate Pentium Based Timer

Ong, CS., Wong, F., Lai, WK.

In 2000 (inproceedings)

PDF [BibTex]

Robust Ensemble Learning for Data Mining

Rätsch, G., Schölkopf, B., Smola, A., Mika, S., Onoda, T., Müller, K.

In Fourth Pacific-Asia Conference on Knowledge Discovery and Data Mining, 1805, pages: 341-341, Lecture Notes in Artificial Intelligence, (Editors: T Terano), Fourth Pacific-Asia Conference on Knowledge Discovery and Data Mining, 2000 (inproceedings)

[BibTex]

Sparse greedy matrix approximation for machine learning.

Smola, A., Schölkopf, B.

In 17th International Conference on Machine Learning, Stanford, 2000, pages: 911-918, (Editors: P Langley), Morgan Kaufmann, San Francisco, CA, USA, 17th International Conference on Machine Learning (ICML), 2000 (inproceedings)

[BibTex]

Entropy Numbers of Linear Function Classes.

Williamson, R., Smola, A., Schölkopf, B.

In 13th Annual Conference on Computational Learning Theory, pages: 309-319, (Editors: N Cesa-Bianchi and S Goldman), Morgan Kaufmann, San Francisco, CA, USA, 13th Annual Conference on Computational Learning Theory (COLT), 2000 (inproceedings)

[BibTex]


1999


Engineering Support Vector Machine Kernels That Recognize Translation Initiation Sites in DNA

Zien, A., Rätsch, G., Mika, S., Schölkopf, B., Lemmen, C., Smola, A., Lengauer, T., Müller, K.

In German Conference on Bioinformatics (GCB 1999), October 1999 (inproceedings)

Abstract
In order to extract protein sequences from nucleotide sequences, it is an important step to recognize points from which regions encoding proteins start, the so-called translation initiation sites (TIS). This can be modeled as a classification problem. We demonstrate the power of support vector machines (SVMs) for this task, and show how to successfully incorporate biological prior knowledge by engineering an appropriate kernel function.

Web [BibTex]

Shrinking the tube: a new support vector regression algorithm

Schölkopf, B., Bartlett, P., Smola, A., Williamson, R.

In Advances in Neural Information Processing Systems 11, pages: 330-336 , (Editors: MS Kearns and SA Solla and DA Cohn), MIT Press, Cambridge, MA, USA, 12th Annual Conference on Neural Information Processing Systems (NIPS), June 1999 (inproceedings)

PDF Web [BibTex]

Semiparametric support vector and linear programming machines

Smola, A., Friess, T., Schölkopf, B.

In Advances in Neural Information Processing Systems 11, pages: 585-591 , (Editors: MS Kearns and SA Solla and DA Cohn), MIT Press, Cambridge, MA, USA, Twelfth Annual Conference on Neural Information Processing Systems (NIPS), June 1999 (inproceedings)

Abstract
Semiparametric models are useful tools in the case where domain knowledge exists about the function to be estimated or emphasis is put onto understandability of the model. We extend two learning algorithms - Support Vector machines and Linear Programming machines to this case and give experimental results for SV machines.

PDF Web [BibTex]

Kernel PCA and De-noising in feature spaces

Mika, S., Schölkopf, B., Smola, A., Müller, K., Scholz, M., Rätsch, G.

In Advances in Neural Information Processing Systems 11, pages: 536-542 , (Editors: MS Kearns and SA Solla and DA Cohn), MIT Press, Cambridge, MA, USA, 12th Annual Conference on Neural Information Processing Systems (NIPS), June 1999 (inproceedings)

Abstract
Kernel PCA as a nonlinear feature extractor has proven powerful as a preprocessing step for classification algorithms. But it can also be considered as a natural generalization of linear principal component analysis. This gives rise to the question how to use nonlinear features for data compression, reconstruction, and de-noising, applications common in linear PCA. This is a nontrivial task, as the results provided by kernel PCA live in some high dimensional feature space and need not have pre-images in input space. This work presents ideas for finding approximate pre-images, focusing on Gaussian kernels, and shows experimental results using these pre-images in data reconstruction and de-noising on toy examples as well as on real world data.
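scikit-learn's `KernelPCA` can learn an approximate inverse map when constructed with `fit_inverse_transform=True`. Note that it uses kernel ridge regression rather than the fixed-point pre-image iteration discussed for Gaussian kernels in this work, but it illustrates the same denoising use; the data and parameters below are invented.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Synthetic data: noisy samples of points on a parabola.
rng = np.random.default_rng(0)
clean = np.column_stack([np.linspace(-1, 1, 200),
                         np.linspace(-1, 1, 200) ** 2])
noisy = clean + rng.normal(scale=0.05, size=clean.shape)

# Project onto a few kernel principal components, then map back to
# approximate pre-images in input space.
kpca = KernelPCA(n_components=4, kernel="rbf", gamma=2.0,
                 fit_inverse_transform=True, alpha=0.1)
denoised = kpca.inverse_transform(kpca.fit_transform(noisy))
print(denoised.shape)   # pre-images live back in the 2-D input space
```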

PDF Web [BibTex]

Kernel principal component analysis.

Schölkopf, B., Smola, A., Müller, K.

In Advances in Kernel Methods—Support Vector Learning, pages: 327-352, (Editors: B Schölkopf and CJC Burges and AJ Smola), MIT Press, Cambridge, MA, 1999 (inbook)

[BibTex]

Classifying LEP data with support vector algorithms.

Vannerem, P., Müller, K., Smola, A., Schölkopf, B., Söldner-Rembold, S.

In Artificial Intelligence in High Energy Nuclear Physics 99, Artificial Intelligence in High Energy Nuclear Physics 99, 1999 (inproceedings)

[BibTex]

Classification on proximity data with LP-machines

Graepel, T., Herbrich, R., Schölkopf, B., Smola, A., Bartlett, P., Müller, K., Obermayer, K., Williamson, R.

In Artificial Neural Networks, 1999. ICANN 99, 470, pages: 304-309, Conference Publications , IEEE, 9th International Conference on Artificial Neural Networks, 1999 (inproceedings)

DOI [BibTex]

Kernel-dependent support vector error bounds

Schölkopf, B., Shawe-Taylor, J., Smola, A., Williamson, R.

In Artificial Neural Networks, 1999. ICANN 99, 470, pages: 103-108 , Conference Publications , IEEE, 9th International Conference on Artificial Neural Networks, 1999 (inproceedings)

DOI [BibTex]

Linear programs for automatic accuracy control in regression

Smola, A., Schölkopf, B., Rätsch, G.

In Artificial Neural Networks, 1999. ICANN 99, 470, pages: 575-580 , Conference Publications , IEEE, 9th International Conference on Artificial Neural Networks, 1999 (inproceedings)

DOI [BibTex]

Regularized principal manifolds.

Smola, A., Williamson, R., Mika, S., Schölkopf, B.

In Lecture Notes in Artificial Intelligence, 1572, pages: 214-229, Lecture Notes in Artificial Intelligence, (Editors: P Fischer and H-U Simon), Springer, Berlin, Germany, Computational Learning Theory: 4th European Conference, 1999 (inproceedings)

[BibTex]

Entropy numbers, operators and support vector kernels.

Williamson, R., Smola, A., Schölkopf, B.

In Lecture Notes in Artificial Intelligence, 1572, pages: 285-299, Lecture Notes in Artificial Intelligence, (Editors: P Fischer and H-U Simon), Springer, Berlin, Germany, Computational Learning Theory: 4th European Conference, 1999 (inproceedings)

[BibTex]

Is the Hippocampus a Kalman Filter?

Bousquet, O., Balakrishnan, K., Honavar, V.

In Proceedings of the Pacific Symposium on Biocomputing, 3, pages: 619-630, Proceedings of the Pacific Symposium on Biocomputing, 1999 (inproceedings)

[BibTex]

A Comparison of Artificial Neural Networks and Cluster Analysis for Typing Biometrics Authentication

Maisuria, K., Ong, CS., Lai, WK.

In International Joint Conference on Neural Networks, 1999 (inproceedings)

PDF [BibTex]

Entropy numbers, operators and support vector kernels.

Williamson, R., Smola, A., Schölkopf, B.

In Advances in Kernel Methods - Support Vector Learning, pages: 127-144, (Editors: B Schölkopf and CJC Burges and AJ Smola), MIT Press, Cambridge, MA, 1999 (inbook)

[BibTex]

Fisher discriminant analysis with kernels

Mika, S., Rätsch, G., Weston, J., Schölkopf, B., Müller, K.

In Proceedings of the 1999 IEEE Signal Processing Society Workshop, 9, pages: 41-48, (Editors: Y-H Hu and J Larsen and E Wilson and S Douglas), IEEE, Neural Networks for Signal Processing IX, 1999 (inproceedings)

DOI [BibTex]
