

2006


Extensions of ICA for Causality Discovery in the Hong Kong Stock Market

Zhang, K., Chan, L.

In Neural Information Processing, 13th International Conference, ICONIP 2006, pages: 400-409, (Editors: I King and J Wang and L Chan and D L Wang), Springer, 13th International Conference on Neural Information Processing (ICONIP), 2006, Lecture Notes in Computer Science, 2006, Volume 4234/2006 (inproceedings)

Web DOI [BibTex]

Enhancement of source independence for blind source separation

Zhang, K., Chan, L.

In Independent Component Analysis and Blind Signal Separation, LNCS 3889, pages: 731-738, (Editors: J. Rosca and D. Erdogmus and JC Príncipe and S. Haykin), Springer, Berlin, Germany, 6th International Conference on Independent Component Analysis and Blind Signal Separation (ICA), 2006, Lecture Notes in Computer Science, 2006, Volume 3889/2006 (inproceedings)

Web DOI [BibTex]

Semigroups applied to transport and queueing processes

Radl, A.

Biologische Kybernetik, Eberhard Karls Universität, Tübingen, 2006 (phdthesis)

PDF [BibTex]

Apparatus for Inspecting Alignment Film of Liquid Crystal Display and Method Thereof

Park, MW., Son, HI., Kim, SJ., Kim, KI., Yang, JW.

Max-Planck-Gesellschaft, Biologische Kybernetik, 2006 (patent)

[BibTex]

ICA with Sparse Connections

Zhang, K., Chan, L.

In Intelligent Data Engineering and Automated Learning – IDEAL 2006, pages: 530-537, (Editors: E Corchado and H Yin and V Botti and C Fyfe), Springer, 7th International Conference on Intelligent Data Engineering and Automated Learning (IDEAL), 2006, Lecture Notes in Computer Science, 2006, Volume 4224/2006 (inproceedings)

Web DOI [BibTex]

Classification of natural scenes: critical features revisited

Drewes, J., Wichmann, F., Gegenfurtner, K.

Experimentelle Psychologie: Beiträge zur 48. Tagung experimentell arbeitender Psychologen, 48, pages: 251, 2006 (poster)

[BibTex]

Local Alignment Kernels for Protein Homology Detection

Saigo, H.

Biologische Kybernetik, Kyoto University, Kyoto, Japan, 2006 (phdthesis)

[BibTex]

Machine Learning Challenges: evaluating predictive uncertainty, visual object classification and recognising textual entailment

Quinonero Candela, J., Dagan, I., Magnini, B., Lauria, F.

Proceedings of the First Pascal Machine Learning Challenges Workshop on Machine Learning Challenges, Evaluating Predictive Uncertainty, Visual Object Classification and Recognizing Textual Entailment (MLCW 2005), pages: 462, Lecture Notes in Computer Science, Springer, Heidelberg, Germany, First Pascal Machine Learning Challenges Workshop (MLCW), 2006 (proceedings)

Abstract
This book constitutes the thoroughly refereed post-proceedings of the First PASCAL (Pattern Analysis, Statistical Modelling and Computational Learning) Machine Learning Challenges Workshop, MLCW 2005, held in Southampton, UK, in April 2005. The 25 revised full papers presented were carefully selected during two rounds of reviewing and improvement from about 50 submissions. The papers reflect the concepts of the three challenges dealt with in the workshop: the first challenge was to assess the uncertainty of predictions using classical statistics, Bayesian inference, and statistical learning theory; the second challenge was to recognize objects from a number of visual object classes in realistic scenes; the third challenge, recognizing textual entailment, addresses semantic analysis of language to form a generic framework for applied semantic inference in text understanding.

Web DOI [BibTex]

Texture and haptic cues in slant discrimination: combination is sensitive to reliability but not statistically optimal

Rosas, P., Wagemans, J., Ernst, M., Wichmann, F.

Beitr{\"a}ge zur 48. Tagung experimentell arbeitender Psychologen (TeaP 2006), 48, pages: 80, 2006 (poster)

[BibTex]

Symbol Recognition with Kernel Density Matching

Zhang, W., Wenyin, L., Zhang, K.

IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(12):2020-2024, 2006 (article)

Abstract
We propose a novel approach to similarity assessment for graphic symbols. Symbols are represented as 2D kernel densities and their similarity is measured by the Kullback-Leibler divergence. Symbol orientation is found by gradient-based angle searching or independent component analysis. Experimental results show the outstanding performance of this approach in various situations.
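A minimal sketch of the general idea under stated assumptions: toy 2D point sets stand in for the sampled symbols, SciPy's gaussian_kde stands in for the paper's density estimator, and the KL divergence is approximated numerically on a grid.

# Hedged sketch: compare two symbols given as 2D point clouds by fitting Gaussian
# kernel densities and approximating KL(p_a || p_b) on a grid. Point sets, bandwidths
# and grid are illustrative, not the paper's data or implementation.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
symbol_a = rng.normal([0.0, 0.0], 0.3, size=(200, 2))   # stand-in for one symbol's points
symbol_b = rng.normal([0.1, 0.0], 0.35, size=(200, 2))  # stand-in for the other symbol

kde_a = gaussian_kde(symbol_a.T)                        # gaussian_kde expects shape (dim, n)
kde_b = gaussian_kde(symbol_b.T)

xs, ys = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))
grid = np.vstack([xs.ravel(), ys.ravel()])
p_a = kde_a(grid) + 1e-12                               # small floor avoids log(0)
p_b = kde_b(grid) + 1e-12
cell = (xs[0, 1] - xs[0, 0]) * (ys[1, 0] - ys[0, 0])    # area of one grid cell
kl = np.sum(p_a * np.log(p_a / p_b)) * cell
print(f"approximate KL(p_a || p_b) = {kl:.4f}")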

Web [BibTex]

Combining a Filter Method with SVMs

Lal, T., Chapelle, O., Schölkopf, B.

In Feature Extraction: Foundations and Applications, Studies in Fuzziness and Soft Computing, Vol. 207, pages: 439-446, Studies in Fuzziness and Soft Computing ; 207, (Editors: I Guyon and M Nikravesh and S Gunn and LA Zadeh), Springer, Berlin, Germany, 2006 (inbook)

Abstract
Our goal for the competition (feature selection competition NIPS 2003) was to evaluate the usefulness of simple machine learning techniques. We decided to use the correlation criterion as a feature selection method and Support Vector Machines for the classification part. Here we explain how we chose the regularization parameter C of the SVM, how we determined the kernel parameter and how we estimated the number of features used for each data set. All analyses were carried out on the training sets of the competition data. We chose the data set Arcene as an example to explain the approach step by step. In our view the point of this competition was the construction of a well-performing classifier rather than the systematic analysis of a specific approach. This is why our search for the best classifier was guided only by the described methods and why we deviated from the road map on several occasions. All calculations were done with the software Spider [2004].
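A hedged sketch of such a filter-then-classify pipeline, using scikit-learn in place of the Spider toolbox named in the abstract; the data, the number of retained features and the value of C are placeholders, not the competition settings.

# Sketch: rank features by absolute Pearson correlation with the labels, keep the top k,
# then train an SVM on the reduced data. Toy data; k and C are arbitrary choices.
import numpy as np
from sklearn.svm import SVC

def correlation_filter(X, y, k):
    """Indices of the k features most correlated (in absolute value) with y."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    return np.argsort(corr)[::-1][:k]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))                          # toy high-dimensional data
y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=100))

idx = correlation_filter(X, y, k=20)
clf = SVC(kernel="linear", C=1.0).fit(X[:, idx], y)
print("training accuracy:", clf.score(X[:, idx], y))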

PDF DOI [BibTex]

Apparatus for Inspecting Flat Panel Display and Method Thereof

Yang, JW., Kim, KI., Son, HI.

Max-Planck-Gesellschaft, Biologische Kybernetik, 2006 (patent)

[BibTex]

An adaptive method for subband decomposition ICA

Zhang, K., Chan, L.

Neural Computation, 18(1):191-223, 2006 (article)

Abstract
Subband decomposition ICA (SDICA), an extension of ICA, assumes that each source is represented as the sum of some independent subcomponents and dependent subcomponents, which have different frequency bands. In this article, we first investigate the feasibility of separating the SDICA mixture in an adaptive manner. Second, we develop an adaptive method for SDICA, namely band-selective ICA (BS-ICA), which finds the mixing matrix and the estimate of the source independent subcomponents. This method is based on the minimization of the mutual information between outputs. Some practical issues are discussed. For better applicability, a scheme to avoid the high-dimensional score function difference is given. Third, we investigate one form of the overcomplete ICA problems with sources having specific frequency characteristics, which BS-ICA can also be used to solve. Experimental results illustrate the success of the proposed method for solving both SDICA and the over-complete ICA problems.

Web DOI [BibTex]

Embedded methods

Lal, T., Chapelle, O., Weston, J., Elisseeff, A.

In Feature Extraction: Foundations and Applications, pages: 137-165, Studies in Fuzziness and Soft Computing ; 207, (Editors: Guyon, I. , S. Gunn, M. Nikravesh, L. A. Zadeh), Springer, Berlin, Germany, 2006 (inbook)

Abstract
Embedded methods are a relatively new approach to feature selection. Unlike filter methods, which do not incorporate learning, and wrapper approaches, which can be used with arbitrary classifiers, in embedded methods the feature selection part cannot be separated from the learning part. Existing embedded methods are reviewed based on a unifying mathematical framework.

PDF Web [BibTex]

Ähnlichkeitsmasse in Modellen zur Kategorienbildung

Jäkel, F., Wichmann, F.

Experimentelle Psychologie: Beiträge zur 48. Tagung experimentell arbeitender Psychologen, 48, pages: 223, 2006 (poster)

[BibTex]

The pedestal effect is caused by off-frequency looking, not nonlinear transduction or contrast gain-control

Wichmann, F., Henning, B.

Experimentelle Psychologie: Beiträge zur 48. Tagung experimentell arbeitender Psychologen, 48, pages: 205, 2006 (poster)

[BibTex]

How to choose the covariance for Gaussian process regression independently of the basis

Franz, M., Gehler, P.

In Proceedings of the Workshop Gaussian Processes in Practice, Workshop Gaussian Processes in Practice (GPIP), 2006 (inproceedings)

pdf [BibTex]

Learning operational space control

Peters, J., Schaal, S.

In Robotics: Science and Systems II (RSS 2006), pages: 255-262, (Editors: Gaurav S. Sukhatme and Stefan Schaal and Wolfram Burgard and Dieter Fox), Cambridge, MA: MIT Press, RSS, 2006, clmc (inproceedings)

Abstract
While operational space control is of essential importance for robotics and well understood from an analytical point of view, it can be prohibitively hard to achieve accurate control in the face of modeling errors, which are inevitable in complex robots, e.g., humanoid robots. In such cases, learning control methods can offer an interesting alternative to analytical control algorithms. However, the resulting learning problem is ill-defined as it requires learning an inverse mapping of a usually redundant system, which is well known to suffer from non-convexity of the solution space, i.e., the learning system could generate motor commands that try to steer the robot into physically impossible configurations. A first important insight of this paper is that, nevertheless, a physically correct solution to the inverse problem does exist when learning of the inverse map is performed in a suitable piecewise linear way. The second crucial component of our work is based on the recent insight that many operational space controllers can be understood in terms of a constrained optimal control problem. The cost function associated with this optimal control problem allows us to formulate a learning algorithm that automatically synthesizes a globally consistent desired resolution of redundancy while learning the operational space controller. From the view of machine learning, the learning problem corresponds to a reinforcement learning problem that maximizes an immediate reward and that employs an expectation-maximization policy search algorithm. Evaluations on a three-degrees-of-freedom robot arm illustrate the feasibility of our suggested approach.

link (url) [BibTex]

Reinforcement Learning for Parameterized Motor Primitives

Peters, J., Schaal, S.

In Proceedings of the 2006 International Joint Conference on Neural Networks, pages: 73-80, IJCNN, 2006, clmc (inproceedings)

Abstract
One of the major challenges in both action generation for robotics and in the understanding of human motor control is to learn the "building blocks of movement generation", called motor primitives. Motor primitives, as used in this paper, are parameterized control policies such as splines or nonlinear differential equations with desired attractor properties. While a lot of progress has been made in teaching parameterized motor primitives using supervised or imitation learning, the self-improvement by interaction of the system with the environment remains a challenging problem. In this paper, we evaluate different reinforcement learning approaches for improving the performance of parameterized motor primitives. For pursuing this goal, we highlight the difficulties with current reinforcement learning methods, and outline both established and novel algorithms for the gradient-based improvement of parameterized policies. We compare these algorithms in the context of motor primitive learning, and show that our most modern algorithm, the Episodic Natural Actor-Critic outperforms previous algorithms by at least an order of magnitude. We demonstrate the efficiency of this reinforcement learning method in the application of learning to hit a baseball with an anthropomorphic robot arm.

link (url) DOI [BibTex]

The rate adapting Poisson model for information retrieval and object recognition

Gehler, P. V., Holub, A. D., Welling, M.

In Proceedings of the 23rd international conference on Machine learning, pages: 337-344, ICML ’06, ACM, New York, NY, USA, 2006 (inproceedings)

project page pdf DOI [BibTex]

Policy gradient methods for robotics

Peters, J., Schaal, S.

In Proceedings of the IEEE International Conference on Intelligent Robotics Systems, pages: 2219-2225, IROS, 2006, clmc (inproceedings)

Abstract
The acquisition and improvement of motor skills and control policies for robotics from trial and error is of essential importance if robots are ever to leave precisely pre-structured environments. However, to date only a few existing reinforcement learning methods have been scaled to the domains of high-dimensional robots such as manipulators, legged or humanoid robots. Policy gradient methods remain one of the few exceptions and have found a variety of applications. Nevertheless, the application of such methods is not without peril if done in an uninformed manner. In this paper, we give an overview of learning with policy gradient methods for robotics with a strong focus on recent advances in the field. We outline previous applications to robotics and show how the most recently developed methods can significantly improve learning performance. Finally, we evaluate our most promising algorithm in the application of hitting a baseball with an anthropomorphic arm.
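For illustration only, a minimal likelihood-ratio ("REINFORCE"-style) policy gradient on a toy one-step task with a Gaussian policy; this is a generic member of the method family surveyed here, not the specific algorithms evaluated in the paper, and all constants are invented.

# Sketch: gradient ascent on E[reward] using the likelihood-ratio estimator with a baseline.
# Policy: scalar Gaussian action a ~ N(theta, sigma^2); reward peaks at a target value.
import numpy as np

rng = np.random.default_rng(0)
theta = 0.0                                   # policy mean (the only learned parameter)
sigma, alpha, target = 0.5, 0.05, 2.0         # exploration noise, step size, reward peak

for step in range(200):
    actions = theta + sigma * rng.normal(size=32)        # sample a batch of actions
    rewards = -(actions - target) ** 2                   # reward: closeness to target
    baseline = rewards.mean()                            # variance-reducing baseline
    # grad log pi(a | theta) = (a - theta) / sigma^2 for a Gaussian policy
    grad = np.mean((rewards - baseline) * (actions - theta) / sigma ** 2)
    theta += alpha * grad

print("learned mean action:", theta, "target:", target)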

link (url) DOI [BibTex]

Implicit Wiener Series, Part II: Regularised estimation

Gehler, P., Franz, M.

(148), Max Planck Institute, 2006 (techreport)

pdf [BibTex]

2005


Spectral clustering and transductive inference for graph data

Zhou, D.

NIPS Workshop on Kernel Methods and Structured Domains, December 2005 (talk)

PDF Web [BibTex]

Kernel Methods for Measuring Independence

Gretton, A., Herbrich, R., Smola, A., Bousquet, O., Schölkopf, B.

Journal of Machine Learning Research, 6, pages: 2075-2129, December 2005 (article)

Abstract
We introduce two new functionals, the constrained covariance and the kernel mutual information, to measure the degree of independence of random variables. These quantities are both based on the covariance between functions of the random variables in reproducing kernel Hilbert spaces (RKHSs). We prove that when the RKHSs are universal, both functionals are zero if and only if the random variables are pairwise independent. We also show that the kernel mutual information is an upper bound near independence on the Parzen window estimate of the mutual information. Analogous results apply for two correlation-based dependence functionals introduced earlier: we show the kernel canonical correlation and the kernel generalised variance to be independence measures for universal kernels, and prove the latter to be an upper bound on the mutual information near independence. The performance of the kernel dependence functionals in measuring independence is verified in the context of independent component analysis.
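A rough sketch of a constrained-covariance style statistic in the spirit of the abstract: centre two Gram matrices and take the largest eigenvalue of their product. The Gaussian kernel, its width and the overall normalisation are illustrative assumptions, not the paper's exact estimators.

# Sketch of a COCO-like dependence statistic on paired scalar samples.
import numpy as np

def rbf_gram(x, sigma=1.0):
    """Gaussian-kernel Gram matrix for a 1-D sample x."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def coco_like(x, y, sigma=1.0):
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n                  # centring matrix
    Kc = H @ rbf_gram(x, sigma) @ H
    Lc = H @ rbf_gram(y, sigma) @ H
    lam_max = np.max(np.real(np.linalg.eigvals(Kc @ Lc)))
    return np.sqrt(max(lam_max, 0.0)) / n                # normalisation chosen for illustration

rng = np.random.default_rng(0)
x = rng.normal(size=300)
print("independent pair:", coco_like(x, rng.normal(size=300)))
print("dependent pair  :", coco_like(x, x ** 2 + 0.1 * rng.normal(size=300)))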

PDF PostScript PDF [BibTex]

Kernel ICA for Large Scale Problems

Jegelka, S., Gretton, A., Achlioptas, D.

In pages: -, NIPS Workshop on Large Scale Kernel Machines, December 2005 (inproceedings)

Web [BibTex]

Infinite dimensional exponential families by reproducing kernel Hilbert spaces

Fukumizu, K.

In IGAIA 2005, pages: 324-333, 2nd International Symposium on Information Geometry and its Applications, December 2005 (inproceedings)

Abstract
The purpose of this paper is to propose a method of constructing exponential families on a Hilbert manifold, on which estimation theory can be built. Although there have been works on infinite-dimensional exponential families of Banach manifolds (Pistone and Sempi, 1995; Gibilisco and Pistone, 1998; Pistone and Rogantin, 1999), they are not appropriate for discussing statistical estimation with a finite number of samples; the likelihood function with finite samples is not continuous on the manifold. In this paper we use a reproducing kernel Hilbert space as a functional space for constructing an exponential manifold. A reproducing kernel Hilbert space is defined as a Hilbert space of functions such that evaluation of a function at an arbitrary point is a continuous functional on the Hilbert space. Since we can discuss the value of a function with this space, it is very natural to use a manifold associated with a reproducing kernel Hilbert space as a basis of estimation theory. We focus on maximum likelihood estimation (MLE) with the exponential manifold of a reproducing kernel Hilbert space. As in many non-parametric estimation methods, a straightforward extension of MLE to an infinite-dimensional exponential manifold suffers from the problem of ill-posedness, caused by the fact that the estimator should be chosen from the infinite-dimensional space with only a finite number of constraints given by the data. To solve this problem, a pseudo-maximum likelihood method is proposed by restricting the infinite-dimensional manifold to a series of finite-dimensional submanifolds, which enlarge as the number of samples increases. Some asymptotic results in the limit of infinite samples are shown, including the consistency of the pseudo-MLE.
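A compact restatement of the construction in notation of my own choosing (not the paper's): densities are indexed by an element f of an RKHS H with kernel k, relative to a base density p_0,

\[
  p_f(x) \;=\; \exp\bigl( f(x) - \psi(f) \bigr)\, p_0(x),
  \qquad
  \psi(f) \;=\; \log \int \exp\bigl( f(x) \bigr)\, p_0(x)\, dx,
  \qquad
  f(x) \;=\; \langle f, k(x,\cdot) \rangle_{\mathcal{H}},
\]

so that pointwise evaluation, and hence the finite-sample log-likelihood, is continuous in f; this is the property the abstract contrasts with the Banach-manifold constructions.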

PDF Web [BibTex]

Some thoughts about Gaussian Processes

Chapelle, O.

NIPS Workshop on Open Problems in Gaussian Processes for Machine Learning, December 2005 (talk)

PDF Web [BibTex]

A Unifying View of Sparse Approximate Gaussian Process Regression

Quinonero Candela, J., Rasmussen, C.

Journal of Machine Learning Research, 6, pages: 1935-1959, December 2005 (article)

Abstract
We provide a new unifying view, including all existing proper probabilistic sparse approximations for Gaussian process regression. Our approach relies on expressing the effective prior which the methods are using. This allows new insights to be gained, and highlights the relationship between existing methods. It also allows for a clear theoretically justified ranking of the closeness of the known approximations to the corresponding full GPs. Finally we point directly to designs of new better sparse approximations, combining the best of the existing strategies, within attractive computational constraints.

PDF [BibTex]

Method and device for detection of splice form and alternative splice forms in DNA or RNA sequences

Rätsch, G., Sonnenburg, S., Müller, K., Schölkopf, B.

European Patent Application, International No PCT/EP2005/005783, December 2005 (patent)

[BibTex]

Popper, Falsification and the VC-dimension

Corfield, D., Schölkopf, B., Vapnik, V.

(145), Max Planck Institute for Biological Cybernetics, November 2005 (techreport)

PDF [BibTex]

Shortest-path kernels on graphs

Borgwardt, KM., Kriegel, H-P.

In pages: 74-81, IEEE Computer Society, Los Alamitos, CA, USA, Fifth International Conference on Data Mining (ICDM), November 2005 (inproceedings)

Abstract
Data mining algorithms are facing the challenge to deal with an increasing number of complex objects. For graph data, a whole toolbox of data mining algorithms becomes available by defining a kernel function on instances of graphs. Graph kernels based on walks, subtrees and cycles in graphs have been proposed so far. As a general problem, these kernels are either computationally expensive or limited in their expressiveness. We try to overcome this problem by defining expressive graph kernels which are based on paths. As the computation of all paths and longest paths in a graph is NP-hard, we propose graph kernels based on shortest paths. These kernels are computable in polynomial time, retain expressivity and are still positive definite. In experiments on classification of graph models of proteins, our shortest-path kernels show significantly higher classification accuracy than walk-based kernels.
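A hedged sketch of the core construction on small unlabelled graphs: compute all-pairs shortest paths with Floyd-Warshall, then compare the two multisets of path lengths with a delta kernel. Node labels and the normalisations discussed in the paper are omitted, and the example graphs are toys.

# Sketch of a shortest-path kernel between two unlabelled, unweighted graphs.
import numpy as np

def floyd_warshall(adj):
    """All-pairs shortest-path lengths for a 0/1 adjacency matrix."""
    n = adj.shape[0]
    dist = np.where(adj > 0, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for k in range(n):
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    return dist

def shortest_path_kernel(adj1, adj2):
    d1, d2 = floyd_warshall(adj1), floyd_warshall(adj2)
    p1 = d1[np.triu_indices_from(d1, k=1)]              # path lengths between distinct node pairs
    p2 = d2[np.triu_indices_from(d2, k=1)]
    p1, p2 = p1[np.isfinite(p1)], p2[np.isfinite(p2)]
    # Delta kernel on lengths: count pairs of shortest paths with equal length.
    return sum(np.sum(p2 == length) for length in p1)

triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
chain = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
print(shortest_path_kernel(triangle, chain))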

Web DOI [BibTex]

Extension to Kernel Dependency Estimation with Applications to Robotics

BakIr, G.

Biologische Kybernetik, Technische Universität Berlin, Berlin, November 2005 (phdthesis)

Abstract
Kernel Dependency Estimation (KDE) is a novel technique which was designed to learn mappings between sets without making assumptions on the type of the involved input and output data. It learns the mapping in two stages. In a first step, it tries to estimate coordinates of a feature-space representation of elements of the set by solving a high-dimensional multivariate regression problem in feature space. Following this, it tries to reconstruct the original representation given the estimated coordinates. This thesis introduces various algorithmic extensions to both stages of KDE. One of the contributions of this thesis is to propose a novel linear regression algorithm that explores low-dimensional subspaces during learning. Furthermore, various existing strategies for reconstructing patterns from feature maps involved in KDE are discussed and novel pre-image techniques are introduced. In particular, pre-image techniques for data types that are of a discrete nature, such as graphs and strings, are investigated. KDE is then explored in the context of robot pose imitation, where the input is an image of a human operator and the output is the robot's articulated variables. Thus, using KDE, robot pose imitation is formulated as a regression problem.

PDF PDF [BibTex]

Kernel methods for dependence testing in LFP-MUA

Gretton, A., Belitski, A., Murayama, Y., Schölkopf, B., Logothetis, N.

35(689.17), 35th Annual Meeting of the Society for Neuroscience (Neuroscience), November 2005 (poster)

Abstract
A fundamental problem in neuroscience is determining whether or not particular neural signals are dependent. The correlation is the most straightforward basis for such tests, but considerable work also focuses on the mutual information (MI), which is capable of revealing dependence of higher orders, which the correlation cannot detect. That said, there are other measures of dependence that share with the MI the ability to detect dependence of any order, but which can be easier to compute in practice. We focus in particular on tests based on the functional covariance, which derive from work originally accomplished in 1959 by Rényi. Conceptually, our dependence tests work by computing the covariance between (infinite-dimensional) vectors of nonlinear mappings of the observations being tested, and then determining whether this covariance is zero; we call this measure the constrained covariance (COCO). When these vectors are members of universal reproducing kernel Hilbert spaces, we can prove this covariance to be zero only when the variables being tested are independent. The greatest advantage of these tests, compared with the mutual information, is their simplicity: when comparing two signals, we need only take the largest eigenvalue (or the trace) of a product of two matrices of nonlinearities, where these matrices are generally much smaller than the number of observations (and are very simple to construct). We compare the mutual information, the COCO, and the correlation in the context of finding changes in dependence between the LFP and MUA signals in the primary visual cortex of the anaesthetized macaque, during the presentation of dynamic natural stimuli. We demonstrate that the MI and COCO reveal dependence which is not detected by the correlation alone (which we prove by artificially removing all correlation between the signals, and then testing their dependence with COCO and the MI), and that COCO and the MI give results consistent with each other on our data.

Web [BibTex]

Training Support Vector Machines with Multiple Equality Constraints

Kienzle, W., Schölkopf, B.

In Proceedings of the 16th European Conference on Machine Learning, Lecture Notes in Computer Science, Vol. 3720, pages: 182-193, (Editors: JG Carbonell and J Siekmann), Springer, Berlin, Germany, ECML, November 2005 (inproceedings)

Abstract
In this paper we present a primal-dual decomposition algorithm for support vector machine training. As with existing methods that use very small working sets (such as Sequential Minimal Optimization (SMO), Successive Over-Relaxation (SOR) or the Kernel Adatron (KA)), our method scales well, is straightforward to implement, and does not require an external QP solver. Unlike SMO, SOR and KA, the method is applicable to a large number of SVM formulations regardless of the number of equality constraints involved. The effectiveness of our algorithm is demonstrated on a more difficult SVM variant in this respect, namely semi-parametric support vector regression.

PDF DOI [BibTex]

Geometrical aspects of statistical learning theory

Hein, M.

Biologische Kybernetik, Technische Universität Darmstadt, Darmstadt, November 2005 (phdthesis)

PDF [BibTex]

Measuring Statistical Dependence with Hilbert-Schmidt Norms

Gretton, A., Bousquet, O., Smola, A., Schölkopf, B.

In Algorithmic Learning Theory, Lecture Notes in Computer Science, Vol. 3734, pages: 63-78, (Editors: S Jain and H-U Simon and E Tomita), Springer, Berlin, Germany, 16th International Conference ALT, October 2005 (inproceedings)

Abstract
We propose an independence criterion based on the eigenspectrum of covariance operators in reproducing kernel Hilbert spaces (RKHSs), consisting of an empirical estimate of the Hilbert-Schmidt norm of the cross-covariance operator (we term this a Hilbert-Schmidt Independence Criterion, or HSIC). This approach has several advantages, compared with previous kernel-based independence criteria. First, the empirical estimate is simpler than any other kernel dependence test, and requires no user-defined regularisation. Second, there is a clearly defined population quantity which the empirical estimate approaches in the large sample limit, with exponential convergence guaranteed between the two: this ensures that independence tests based on HSIC do not suffer from slow learning rates. Finally, we show in the context of independent component analysis (ICA) that the performance of HSIC is competitive with that of previously published kernel-based criteria, and of other recently published ICA methods.
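A minimal sketch of the biased empirical estimate described above, HSIC = tr(K H L H) / (n-1)^2 with centring matrix H; the Gaussian kernels and the fixed bandwidth are illustrative stand-ins for whatever kernels and bandwidth selection one would actually use.

# Sketch of the empirical HSIC statistic on paired scalar samples.
import numpy as np

def rbf_gram(x, sigma=1.0):
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n                  # centring matrix
    K, L = rbf_gram(x, sigma), rbf_gram(y, sigma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=500)
print("independent pair:", hsic(x, rng.normal(size=500)))
print("dependent pair  :", hsic(x, np.sin(3 * x) + 0.1 * rng.normal(size=500)))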

PDF DOI [BibTex]

Maximal Margin Classification for Metric Spaces

Hein, M., Bousquet, O., Schölkopf, B.

Journal of Computer and System Sciences, 71(3):333-359, October 2005 (article)

Abstract
In order to apply the maximum margin method in arbitrary metric spaces, we suggest embedding the metric space into a Banach or Hilbert space and performing linear classification in this space. We propose several embeddings and recall that an isometric embedding in a Banach space is always possible, while an isometric embedding in a Hilbert space is only possible for certain metric spaces. As a result, we obtain a general maximum margin classification algorithm for arbitrary metric spaces (whose solution is approximated by an algorithm of Graepel). Interestingly enough, the embedding approach, when applied to a metric which can be embedded into a Hilbert space, yields the SVM algorithm, which emphasizes the fact that its solution depends on the metric and not on the kernel. Furthermore, we give upper bounds on the capacity of the function classes corresponding to both embeddings in terms of Rademacher averages. Finally, we compare the capacities of these function classes directly.
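A rough sketch of the embedding route, assuming the metric embeds (at least approximately) isometrically into a Hilbert space: double-centre the squared distance matrix to obtain a Gram matrix, project it onto the positive semidefinite cone, and train an SVM with a precomputed kernel. Data and C are placeholders, and Euclidean distances are used only so the example is self-contained.

# Sketch: maximum-margin classification from a distance matrix via a Hilbert-space embedding.
import numpy as np
from sklearn.svm import SVC

def gram_from_distances(D):
    n = D.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    K = -0.5 * H @ (D ** 2) @ H                          # classical MDS double centring
    w, V = np.linalg.eigh(K)
    return (V * np.clip(w, 0, None)) @ V.T               # clip negative eigenvalues (non-Hilbertian metrics)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 5)), rng.normal(+1, 1, (50, 5))])
y = np.r_[np.zeros(50), np.ones(50)]
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)     # any metric would do; Euclidean here

K = gram_from_distances(D)
clf = SVC(kernel="precomputed", C=1.0).fit(K, y)
print("training accuracy:", clf.score(K, y))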

PDF PDF DOI [BibTex]

An Analysis of the Anti-Learning Phenomenon for the Class Symmetric Polyhedron

Kowalczyk, A., Chapelle, O.

In Algorithmic Learning Theory: 16th International Conference, pages: 78-92, Algorithmic Learning Theory, October 2005 (inproceedings)

Abstract
This paper deals with an unusual phenomenon where most machine learning algorithms yield good performance on the training set but systematically worse than random performance on the test set. This has been observed so far for some natural data sets and demonstrated for some synthetic data sets when the classification rule is learned from a small set of training samples drawn from some high-dimensional space. The initial analysis presented in this paper shows that anti-learning is a property of data sets and is quite distinct from overfitting of the training data. Moreover, the analysis leads to a specification of some machine learning procedures which can overcome anti-learning and generate machines able to classify training and test data consistently.

PDF [BibTex]

Selective integration of multiple biological data for supervised network inference

Kato, T., Tsuda, K., Asai, K.

Bioinformatics, 21(10):2488, October 2005 (article)

PDF [BibTex]

Assessing Approximate Inference for Binary Gaussian Process Classification

Kuss, M., Rasmussen, C.

Journal of Machine Learning Research, 6, pages: 1679, October 2005 (article)

Abstract
Gaussian process priors can be used to define flexible, probabilistic classification models. Unfortunately exact Bayesian inference is analytically intractable and various approximation techniques have been proposed. In this work we review and compare Laplace's method and Expectation Propagation for approximate Bayesian inference in the binary Gaussian process classification model. We present a comprehensive comparison of the approximations, their predictive performance and marginal likelihood estimates to results obtained by MCMC sampling. We explain theoretically and corroborate empirically the advantages of Expectation Propagation compared to Laplace's method.

PDF PDF [BibTex]

Implicit Surfaces For Modelling Human Heads

Steinke, F.

Biologische Kybernetik, Eberhard-Karls-Universität, Tübingen, September 2005 (diplomathesis)

[BibTex]

A new methodology for robot controller design

Peters, J., Mistry, M., Udwadia, F.

In Proceedings of the 5th ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (IDETC'05), 5, pages: 1067-1076, ASME, New York, NY, USA, 5th ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (IDETC-MSNDC), September 2005 (inproceedings)

Abstract
Gauss' principle of least constraint and its generalizations have provided useful insights for the development of tracking controllers for mechanical systems [1]. Using this concept, we present a novel methodology for the design of a specific class of robot controllers. With our new framework, we demonstrate that well-known and also several novel nonlinear robot control laws can be derived from this generic framework, and show experimental verifications on a Sarcos Master Arm robot for some of these controllers. We believe that the suggested approach unifies and simplifies the design of optimal nonlinear control laws for robots obeying rigid-body dynamics equations, with or without external constraints, holonomic or nonholonomic constraints, with over-actuation or under-actuation, as well as open-chain and closed-chain kinematics.

PDF Web DOI [BibTex]

Clustering on the Unit Hypersphere using von Mises-Fisher Distributions

Banerjee, A., Dhillon, I., Ghosh, J., Sra, S.

Journal of Machine Learning Research, 6, pages: 1345-1382, September 2005 (article)

Abstract
Several large scale data mining applications, such as text categorization and gene expression analysis, involve high-dimensional data that is also inherently directional in nature. Often such data is L2 normalized so that it lies on the surface of a unit hypersphere. Popular models such as (mixtures of) multi-variate Gaussians are inadequate for characterizing such data. This paper proposes a generative mixture-model approach to clustering directional data based on the von Mises-Fisher (vMF) distribution, which arises naturally for data distributed on the unit hypersphere. In particular, we derive and analyze two variants of the Expectation Maximization (EM) framework for estimating the mean and concentration parameters of this mixture. Numerical estimation of the concentration parameters is non-trivial in high dimensions since it involves functional inversion of ratios of Bessel functions. We also formulate two clustering algorithms corresponding to the variants of EM that we derive. Our approach provides a theoretical basis for the use of cosine similarity that has been widely employed by the information retrieval community, and obtains the spherical kmeans algorithm (kmeans with cosine similarity) as a special case of both variants. Empirical results on clustering of high-dimensional text and gene-expression data based on a mixture of vMF distributions show that the ability to estimate the concentration parameter for each vMF component, which is not present in existing approaches, yields superior results, especially for difficult clustering tasks in high-dimensional spaces.
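A minimal sketch of spherical k-means, the special case noted at the end of the abstract (k-means with cosine similarity); the synthetic data and random initialisation are illustrative, not the paper's EM variants.

# Sketch: alternate between cosine-similarity assignments and re-normalised mean directions.
import numpy as np

def spherical_kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)     # project data onto the unit sphere
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmax(X @ centroids.T, axis=1)      # nearest centroid by cosine similarity
        for j in range(k):
            members = X[labels == j]
            if len(members):
                mean_dir = members.sum(axis=0)
                centroids[j] = mean_dir / (np.linalg.norm(mean_dir) + 1e-12)
    return labels, centroids

rng = np.random.default_rng(1)
X = np.vstack([rng.normal([5, 0, 0], 1.0, (100, 3)),
               rng.normal([0, 5, 0], 1.0, (100, 3))])
labels, _ = spherical_kmeans(X, k=2)
print("cluster sizes:", np.bincount(labels))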

PDF [BibTex]

Support Vector Machines for 3D Shape Processing

Steinke, F., Schölkopf, B., Blanz, V.

Computer Graphics Forum, 24(3):285-294 (EUROGRAPHICS 2005), September 2005 (article)

Abstract
We propose statistical learning methods for approximating implicit surfaces and computing dense 3D deformation fields. Our approach is based on Support Vector (SV) Machines, which are state of the art in machine learning. It is straightforward to implement and computationally competitive; its parameters can be automatically set using standard machine learning methods. The surface approximation is based on a modified Support Vector regression. We present applications to 3D head reconstruction, including automatic removal of outliers and hole filling. In a second step, we build on our SV representation to compute dense 3D deformation fields between two objects. The fields are computed using a generalized SV Machine enforcing correspondence between the previously learned implicit SV object representations, as well as correspondences between feature points if such points are available. We apply the method to the morphing of 3D heads and other objects.

PDF [BibTex]

Rapid animal detection in natural scenes: Critical features are local

Wichmann, F., Rosas, P., Gegenfurtner, K.

Journal of Vision, 5(8):376, Fifth Annual Meeting of the Vision Sciences Society (VSS), September 2005 (poster)

Abstract
Thorpe et al (Nature 381, 1996) first showed how rapidly human observers are able to classify natural images as to whether they contain an animal or not. Whilst the basic result has been replicated using different response paradigms (yes-no versus forced-choice), modalities (eye movements versus button presses) as well as while measuring neurophysiological correlates (ERPs), it is still unclear which image features support this rapid categorisation. Recently Torralba and Oliva (Network: Computation in Neural Systems, 14, 2003) suggested that simple global image statistics can be used to predict seemingly complex decisions about the absence and/or presence of objects in natural scenes. They show that the information contained in a small number (N=16) of spectral principal components (SPC), i.e., principal component analysis (PCA) applied to the normalised power spectra of the images, is sufficient to achieve approximately 80% correct animal detection in natural scenes. Our goal was to test whether human observers make use of the power spectrum when rapidly classifying natural scenes. We measured our subjects' ability to detect animals in natural scenes as a function of presentation time (13 to 167 msec); images were immediately followed by a noise mask. In one condition we used the original images, in the other images whose power spectra were equalised (each power spectrum was set to the mean power spectrum over our ensemble of 1476 images). Thresholds for 75% correct animal detection were in the region of 20–30 msec for all observers, independent of the power spectrum of the images: this result makes it very unlikely that human observers make use of the global power spectrum. Taken together with the results of Gegenfurtner, Braun & Wichmann (Journal of Vision [abstract], 2003), showing the robustness of animal detection to global phase noise, we conclude that humans use local features, like edges and contours, in rapid animal detection.
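For concreteness, a small sketch of the spectrum-equalisation manipulation described above: each image's amplitude spectrum is replaced by the ensemble mean while its phase is kept, so that only local/phase structure can distinguish the images. Random arrays stand in for the natural-scene ensemble.

# Sketch: equalise the power (amplitude) spectra of an image ensemble, keeping each image's phase.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((10, 64, 64))                        # toy stand-in for the image ensemble

spectra = np.fft.fft2(images)                            # 2-D FFT of each image
mean_amplitude = np.abs(spectra).mean(axis=0)            # ensemble-average amplitude spectrum
phases = np.angle(spectra)

equalised = np.real(np.fft.ifft2(mean_amplitude * np.exp(1j * phases)))
print(equalised.shape)                                   # all images now share the same amplitude spectrum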

Web DOI [BibTex]

Correlation of EEG spectral entropy with regional cerebral blood flow during sevoflurane and propofol anaesthesia

Maksimow, A., Kaisti, K., Aalto, S., Mäenpää, M., Jääskeläinen, S., Hinkka, S., Martens, SMM., Särkelä, M., Viertiö-Oja, H., Scheinin, H.

Anaesthesia, 60(9):862-869, September 2005 (article)

Abstract
ENTROPY index monitoring, based on spectral entropy of the electroencephalogram, is a promising new method to measure the depth of anaesthesia. We examined the association between spectral entropy and regional cerebral blood flow in healthy subjects anaesthetised with 2%, 3% and 4% end-expiratory concentrations of sevoflurane and 7.6, 12.5 and 19.0 microg.ml(-1) plasma drug concentrations of propofol. Spectral entropy from the frequency band 0.8-32 Hz was calculated and cerebral blood flow assessed using positron emission tomography and [(15)O]-labelled water at baseline and at each anaesthesia level. Both drugs induced significant reductions in spectral entropy and cortical and global cerebral blood flow. Midfrontal-central spectral entropy was associated with individual frontal and whole brain blood flow values across all conditions, suggesting that this novel measure of anaesthetic depth can depict global changes in neuronal activity induced by the drugs. The cortical areas of the most significant associations were remarkably similar for both drugs.

DOI [BibTex]

Fast Protein Classification with Multiple Networks

Tsuda, K., Shin, H., Schölkopf, B.

Bioinformatics, 21(Suppl. 2):59-65, September 2005 (article)

Abstract
Support vector machines (SVM) have been successfully used to classify proteins into functional categories. Recently, to integrate multiple data sources, a semidefinite programming (SDP) based SVM method was introduced by Lanckriet et al. (2004). In SDP/SVM, multiple kernel matrices corresponding to each of the data sources are combined with weights obtained by solving an SDP. However, when trying to apply SDP/SVM to large problems, the computational cost can become prohibitive, since both converting the data to a kernel matrix for the SVM and solving the SDP are time and memory demanding. Another application-specific drawback arises when some of the data sources are protein networks. A common method of converting the network to a kernel matrix is the diffusion kernel method, which has time complexity of O(n^3) and produces a dense matrix of size n x n. We propose an efficient method of protein classification using multiple protein networks. Available protein networks, such as a physical interaction network or a metabolic network, can be directly incorporated. Vectorial data can also be incorporated after conversion into a network by means of neighbor point connection. Similarly to the SDP/SVM method, the combination weights are obtained by convex optimization. Due to the sparsity of network edges, the computation time is nearly linear in the number of edges of the combined network. Additionally, the combination weights provide information useful for discarding noisy or irrelevant networks. Experiments on function prediction of 3588 yeast proteins show promising results: the computation time is enormously reduced, while the accuracy is still comparable to the SDP/SVM method.

PDF Web DOI [BibTex]

Analyzing microarray data using quantitative association rules

Georgii, E., Richter, L., Rückert, U., Kramer, S.

Bioinformatics, 21(Suppl. 2):123-129, September 2005 (article)

Abstract
Motivation: We tackle the problem of finding regularities in microarray data. Various data mining tools, such as clustering, classification, Bayesian networks and association rules, have been applied so far to gain insight into gene-expression data. Association rule mining techniques used so far work on discretizations of the data and cannot account for cumulative effects. In this paper, we investigate the use of quantitative association rules that can operate directly on numeric data and represent cumulative effects of variables. Technically speaking, this type of quantitative association rules based on half-spaces can find non-axis-parallel regularities. Results: We performed a variety of experiments testing the utility of quantitative association rules for microarray data. First of all, the results should be statistically significant and robust against fluctuations in the data. Next, the approach should be scalable in the number of variables, which is important for such high-dimensional data. Finally, the rules should make sense biologically and be sufficiently different from rules found in regular association rule mining working with discretizations. In all of these dimensions, the proposed approach performed satisfactorily. Therefore, quantitative association rules based on half-spaces should be considered as a tool for the analysis of microarray gene-expression data.

Web DOI [BibTex]

EEG-Based Mental Task Classification: Linear and Nonlinear Classification of Movement Imagery

Akrami, A.

In 27th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS), September 1-4, Shanghai, China, September 2005 (inproceedings) Accepted

Abstract
Use of EEG signals as a channel of communication between humans and machines represents one of the current challenges in signal theory research. The principal element of such a communication system, known as a "Brain-Computer Interface", is the interpretation of the EEG signals related to the characteristic parameters of brain electrical activity. Our goal in this work was to extract quantitative changes in the EEG due to movement imagination. The subject's EEG was recorded while he performed left- or right-hand movement imagination. Different feature sets extracted from the EEG were used as inputs into linear, neural network and HMM classifiers for the purpose of imagery-movement mental task classification. The results indicate that applying a linear classifier to 5 frequency features of the asymmetry signal produced from channels C3 and C4 can provide very high classification accuracy with a simple classifier and a small number of features compared to other feature sets.

[BibTex]
