

2007


Improving the Caenorhabditis elegans Genome Annotation Using Machine Learning

Rätsch, G., Sonnenburg, S., Srinivasan, J., Witte, H., Müller, K., Sommer, R., Schölkopf, B.

PLoS Computational Biology, 3(2, e20):0313-0322, February 2007 (article)

PDF DOI [BibTex]

The Independent Components of Natural Images are Perceptually Dependent

Bethge, M., Wiecki, T., Wichmann, F.

In Human Vision and Electronic Imaging XII, pages: 1-12, (Editors: Rogowitz, B. E.), SPIE, Bellingham, WA, USA, SPIE Human Vision and Electronic Imaging Conference, February 2007 (inproceedings)

Abstract
The independent components of natural images are a set of linear filters which are optimized for statistical independence. With such a set of filters images can be represented without loss of information. Intriguingly, the filter shapes are localized, oriented, and bandpass, resembling important properties of V1 simple cell receptive fields. Here we address the question of whether the independent components of natural images are also perceptually less dependent than other image components. We compared the pixel basis, the ICA basis and the discrete cosine basis by asking subjects to interactively predict missing pixels (for the pixel basis) or to predict the coefficients of ICA and DCT basis functions in patches of natural images. Like Kersten (1987) we find the pixel basis to be perceptually highly redundant but perhaps surprisingly, the ICA basis showed significantly higher perceptual dependencies than the DCT basis. This shows a dissociation between statistical and perceptual dependence measures.

PDF Web DOI [BibTex]

Statistical Consistency of Kernel Canonical Correlation Analysis

Fukumizu, K., Bach, F., Gretton, A.

Journal of Machine Learning Research, 8, pages: 361-383, February 2007 (article)

Abstract
While kernel canonical correlation analysis (CCA) has been applied in many contexts, the convergence of finite sample estimates of the associated functions to their population counterparts has not yet been established. This paper gives a mathematical proof of the statistical convergence of kernel CCA, providing a theoretical justification for the method. The proof uses covariance operators defined on reproducing kernel Hilbert spaces, and analyzes the convergence of their empirical estimates of finite rank to their population counterparts, which can have infinite rank. The result also gives a sufficient condition for convergence on the regularization coefficient involved in kernel CCA: this should decrease as n^{-1/3}, where n is the number of data.

PDF [BibTex]

Unsupervised learning of a steerable basis for invariant image representations

Bethge, M., Gerwinn, S., Macke, J.

In Human Vision and Electronic Imaging XII, pages: 1-12, (Editors: Rogowitz, B. E.), SPIE, Bellingham, WA, USA, SPIE Human Vision and Electronic Imaging Conference, February 2007 (inproceedings)

Abstract
There are two aspects to unsupervised learning of invariant representations of images: First, we can reduce the dimensionality of the representation by finding an optimal trade-off between temporal stability and informativeness. We show that the answer to this optimization problem is generally not unique so that there is still considerable freedom in choosing a suitable basis. Which of the many optimal representations should be selected? Here, we focus on this second aspect, and seek to find representations that are invariant under geometrical transformations occurring in sequences of natural images. We utilize ideas of steerability and Lie groups, which have been developed in the context of filter design. In particular, we show how an anti-symmetric version of canonical correlation analysis can be used to learn a full-rank image basis which is steerable with respect to rotations. We provide a geometric interpretation of this algorithm by showing that it finds the two-dimensional eigensubspaces of the average bivector. For data which exhibits a variety of transformations, we develop a bivector clustering algorithm, which we use to learn a basis of generalized quadrature pairs (i.e. complex cells) from sequences of natural images.

PDF Web DOI [BibTex]

A Subspace Kernel for Nonlinear Feature Extraction

Wu, M., Farquhar, J.

In IJCAI-07, pages: 1125-1130, (Editors: Veloso, M. M.), AAAI Press, Menlo Park, CA, USA, International Joint Conference on Artificial Intelligence, January 2007 (inproceedings)

Abstract
Kernel based nonlinear Feature Extraction (KFE) or dimensionality reduction is a widely used pre-processing step in pattern classification and data mining tasks. Given a positive definite kernel function, it is well known that the input data are implicitly mapped to a feature space with usually very high dimensionality. The goal of KFE is to find a low dimensional subspace of this feature space, which retains most of the information needed for classification or data analysis. In this paper, we propose a subspace kernel based on which the feature extraction problem is transformed to a kernel parameter learning problem. The key observation is that when projecting data into a low dimensional subspace of the feature space, the parameters that are used for describing this subspace can be regarded as the parameters of the kernel function between the projected data. Therefore current kernel parameter learning methods can be adapted to optimize this parameterized kernel function. Experimental results are provided to validate the effectiveness of the proposed approach.

PDF Web [BibTex]

Graph kernels for disease outcome prediction from protein-protein interaction networks

Borgwardt, KM., Vishwanathan, SVN., Schraudolph, N., Kriegel, H-P.

In pages: 4-15, (Editors: Altman, R.B. A.K. Dunker, L. Hunter, T. Murray, T.E. Klein), World Scientific, Hackensack, NJ, USA, Pacific Symposium on Biocomputing (PSB), January 2007 (inproceedings)

Abstract
It is widely believed that comparing discrepancies in the protein-protein interaction (PPI) networks of individuals will become an important tool in understanding and preventing diseases. Currently PPI networks for individuals are not available, but gene expression data is becoming easier to obtain and allows us to represent individuals by a co-integrated gene expression/protein interaction network. Two major problems hamper the application of graph kernels – state-of-the-art methods for whole-graph comparison – to compare PPI networks. First, these methods do not scale to graphs of the size of a PPI network. Second, missing edges in these interaction networks are biologically relevant for detecting discrepancies, yet, these methods do not take this into account. In this article we present graph kernels for biological network comparison that are fast to compute and take into account missing interactions. We evaluate their practical performance on two datasets of co-integrated gene expression/PPI networks.

PDF [BibTex]

Some observations on the pedestal effect

Henning, G., Wichmann, F.

Journal of Vision, 7(1:3):1-15, January 2007 (article)

Abstract
The pedestal or dipper effect is the large improvement in the detectability of a sinusoidal grating observed when it is added to a masking or pedestal grating of the same spatial frequency, orientation, and phase. We measured the pedestal effect in both broadband and notched noise (noise from which a 1.5-octave band centered on the signal frequency had been removed). Although the pedestal effect persists in broadband noise, it almost disappears in the notched noise. Furthermore, the pedestal effect is substantial when either high- or low-pass masking noise is used. We conclude that the pedestal effect in the absence of notched noise results principally from the use of information derived from channels with peak sensitivities at spatial frequencies different from that of the signal and the pedestal. We speculate that the spatial-frequency components of the notched noise above and below the spatial frequency of the signal and the pedestal prevent "off-frequency looking," that is, prevent the use of information about changes in contrast carried in channels tuned to spatial frequencies that are very much different from that of the signal and the pedestal. Thus, the pedestal or dipper effect measured without notched noise appears not to be a characteristic of individual spatial-frequency-tuned channels.

PDF Web DOI [BibTex]

Cue Combination and the Effect of Horizontal Disparity and Perspective on Stereoacuity

Zalevski, AM., Henning, GB., Hill, NJ.

Spatial Vision, 20(1):107-138, January 2007 (article)

Abstract
Relative depth judgments of vertical lines based on horizontal disparity deteriorate enormously when the lines form part of closed configurations (Westheimer, 1979). In studies showing this effect, perspective was not manipulated and thus produced inconsistency between horizontal disparity and perspective. We show that stereoacuity improves dramatically when perspective and horizontal disparity are made consistent. Observers appear to use unhelpful perspective cues in judging the relative depth of the vertical sides of rectangles in a way not incompatible with a form of cue weighting. However, 95% confidence intervals for the weights derived for cues usually exceed the a-priori [0-1] range.

PDF PDF DOI [BibTex]

Independent Factor Reinforcement Learning for Portfolio Management

Li, J., Zhang, K., Chan, L.

In Proceedings of the 8th International Conference on Intelligent Data Engineering and Automated Learning (IDEAL 2007), pages: 1020-1031, (Editors: H Yin and P Tiño and E Corchado and W Byrne and X Yao), Springer, Berlin, Germany, 8th International Conference on Intelligent Data Engineering and Automated Learning (IDEAL), 2007 (inproceedings)

Web [BibTex]

Classificazione di immagini telerilevate satellitari per agricoltura di precisione [Classification of satellite remote sensing images for precision agriculture]

Arnoldi, E., Bruzzone, L., Carlin, L., Pedron, L., Persello, C.

MondoGis: Il Mondo dei Sistemi Informativi Geografici, 63, pages: 13-17, 2007 (article)

[BibTex]

Separating convolutive mixtures by pairwise mutual information minimization

Zhang, K., Chan, L.

IEEE Signal Processing Letters, 14(12):992-995, 2007 (article)

Abstract
Blind separation of convolutive mixtures by minimizing the mutual information between output sequences can avoid the side effect of temporally whitening the outputs, but it involves the score function difference, whose estimation may be problematic when the data dimension is greater than two. This greatly limits the application of this method. Fortunately, for separating convolutive mixtures, pairwise independence of outputs leads to their mutual independence. As an implementation of this idea, we propose a way to separate convolutive mixtures by enforcing pairwise independence. This approach can be applied to separate convolutive mixtures of a moderate number of sources.

Web [BibTex]


Kernel-Based Nonlinear Independent Component Analysis

Zhang, K., Chan, L.

In Independent Component Analysis and Signal Separation, 7th International Conference, ICA 2007, pages: 301-308, (Editors: M E Davies and C J James and S A Abdallah and M D Plumbley), Springer, 7th International Conference on Independent Component Analysis and Signal Separation (ICA), 2007, Lecture Notes in Computer Science, Vol. 4666 (inproceedings)

Web DOI [BibTex]

Towards Machine Learning of Motor Skills

Peters, J., Schaal, S., Schölkopf, B.

In Proceedings of Autonome Mobile Systeme (AMS), pages: 138-144, (Editors: K Berns and T Luksch), 2007, clmc (inproceedings)

Abstract
Autonomous robots that can adapt to novel situations have been a long-standing vision of robotics, artificial intelligence, and cognitive sciences. Early approaches to this goal during the heydays of artificial intelligence research in the late 1980s, however, made it clear that an approach purely based on reasoning or human insights would not be able to model all the perceptuomotor tasks that a robot should fulfill. Instead, new hope was put in the growing wake of machine learning that promised fully adaptive control algorithms which learn both by observation and trial-and-error. However, to date, learning techniques have yet to fulfill this promise as only few methods manage to scale into the high-dimensional domains of manipulator robotics, or even the new upcoming trend of humanoid robotics, and usually scaling was only achieved in precisely pre-structured domains. In this paper, we investigate the ingredients for a general approach to motor skill learning in order to get one step closer towards human-like performance. For doing so, we study two major components for such an approach, i.e., firstly, a theoretically well-founded general approach to representing the required control structures for task representation and execution and, secondly, appropriate learning algorithms which can be applied in this setting.

PDF DOI [BibTex]

Reinforcement Learning for Optimal Control of Arm Movements

Theodorou, E., Peters, J., Schaal, S.

In Abstracts of the 37th Meeting of the Society for Neuroscience, Neuroscience, 2007, clmc (inproceedings)

Abstract
Everyday motor behavior consists of a plethora of challenging motor skills from discrete movements such as reaching and throwing to rhythmic movements such as walking, drumming and running. How this plethora of motor skills can be learned remains an open question. In particular, is there any unifying computational framework that could model the learning process of this variety of motor behaviors and at the same time be biologically plausible? In this work we aim to give an answer to these questions by providing a computational framework that unifies the learning mechanism of both rhythmic and discrete movements under optimization criteria, i.e., in a non-supervised trial-and-error fashion. Our suggested framework is based on Reinforcement Learning, which is mostly considered as too costly to be a plausible mechanism for learning complex limb movement. However, recent work on reinforcement learning with policy gradients combined with parameterized movement primitives allows novel and more efficient algorithms. By using the representational power of such motor primitives we show how rhythmic motor behaviors such as walking, squashing and drumming as well as discrete behaviors like reaching and grasping can be learned with biologically plausible algorithms. Using extensive simulations and by using different reward functions we provide results that support the hypothesis that Reinforcement Learning could be a viable candidate for motor learning of human motor behavior when other learning methods like supervised learning are not feasible.

[BibTex]

Reinforcement learning by reward-weighted regression for operational space control

Peters, J., Schaal, S.

In Proceedings of the 24th Annual International Conference on Machine Learning, pages: 745-750, ICML, 2007, clmc (inproceedings)

Abstract
Many robot control problems of practical importance, including operational space control, can be reformulated as immediate reward reinforcement learning problems. However, few of the known optimization or reinforcement learning algorithms can be used in online learning control for robots, as they are either prohibitively slow, do not scale to interesting domains of complex robots, or require trying out policies generated by random search, which are infeasible for a physical system. Using a generalization of the EM-based reinforcement learning framework suggested by Dayan & Hinton, we reduce the problem of learning with immediate rewards to a reward-weighted regression problem with an adaptive, integrated reward transformation for faster convergence. The resulting algorithm is efficient, learns smoothly without dangerous jumps in solution space, and works well in applications of complex high degree-of-freedom robots.
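As a concrete illustration of the core idea, the following minimal Python sketch performs one reward-weighted regression update of a linear policy. The exponential reward transformation and the feature matrix Phi are illustrative assumptions; the paper's full algorithm additionally adapts the reward transformation between iterations.

```python
import numpy as np

def rwr_update(Phi, A, r, beta=1.0):
    # One reward-weighted regression step for a linear policy a = Phi @ theta:
    # each sample is weighted by an exponential transform of its immediate reward,
    # so high-reward actions dominate the weighted least-squares fit (EM-style M-step).
    w = np.exp(beta * (r - r.max()))   # assumed form of the reward transformation
    Wt = Phi.T * w                     # same as Phi.T @ diag(w)
    return np.linalg.solve(Wt @ Phi, Wt @ A)

# toy usage: 100 samples, 5 policy features, 2 action dimensions
rng = np.random.default_rng(0)
Phi, A, r = rng.normal(size=(100, 5)), rng.normal(size=(100, 2)), rng.random(100)
print(rwr_update(Phi, A, r).shape)     # (5, 2)
```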

link (url) DOI [BibTex]

Policy gradient methods for machine learning

Peters, J., Theodorou, E., Schaal, S.

In Proceedings of the 14th INFORMS Conference of the Applied Probability Society, pages: 97-98, Eindhoven, Netherlands, July 9-11, 2007, 2007, clmc (inproceedings)

Abstract
We present an in-depth survey of policy gradient methods as they are used in the machine learning community for optimizing parameterized, stochastic control policies in Markovian systems with respect to the expected reward. Despite having been developed separately in the reinforcement learning literature, policy gradient methods employ likelihood ratio gradient estimators as also suggested in the stochastic simulation optimization community. It is well-known that this approach to policy gradient estimation traditionally suffers from three drawbacks, i.e., large variance, a strong dependence on baseline functions and an inefficient gradient descent. In this talk, we will present a series of recent results which tackle each of these problems. The variance of the gradient estimation can be reduced significantly through recently introduced techniques such as optimal baselines, compatible function approximations and all-action gradients. However, as even the analytically obtainable policy gradients perform unnaturally slowly, it required the step from 'vanilla' policy gradient methods towards natural policy gradients in order to overcome the inefficiency of the gradient descent. This development resulted in the Natural Actor-Critic architecture which can be shown to be very efficient in application to motor primitive learning for robotics.

[BibTex]

Policy Learning for Motor Skills

Peters, J., Schaal, S.

In Proceedings of 14th International Conference on Neural Information Processing (ICONIP), pages: 233-242, (Editors: Ishikawa, M. , K. Doya, H. Miyamoto, T. Yamakawa), 2007, clmc (inproceedings)

Abstract
Policy learning which allows autonomous robots to adapt to novel situations has been a long standing vision of robotics, artificial intelligence, and cognitive sciences. However, to date, learning techniques have yet to fulfill this promise as only few methods manage to scale into the high-dimensional domains of manipulator robotics, or even the new upcoming trend of humanoid robotics, and usually scaling was only achieved in precisely pre-structured domains. In this paper, we investigate the ingredients for a general approach to policy learning with the goal of an application to motor skill refinement in order to get one step closer towards human-like performance. For doing so, we study two major components for such an approach, i.e., firstly, we study policy learning algorithms which can be applied in the general setting of motor skill learning, and, secondly, we study a theoretically well-founded general approach to representing the required control structures for task representation and execution.

PDF DOI [BibTex]

Reinforcement learning for operational space control

Peters, J., Schaal, S.

In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, pages: 2111-2116, IEEE Computer Society, ICRA, 2007, clmc (inproceedings)

Abstract
While operational space control is of essential importance for robotics and well-understood from an analytical point of view, it can be prohibitively hard to achieve accurate control in face of modeling errors, which are inevitable in complex robots, e.g., humanoid robots. In such cases, learning control methods can offer an interesting alternative to analytical control algorithms. However, the resulting supervised learning problem is ill-defined as it requires to learn an inverse mapping of a usually redundant system, which is well known to suffer from the property of non-convexity of the solution space, i.e., the learning system could generate motor commands that try to steer the robot into physically impossible configurations. The important insight that many operational space control algorithms can be reformulated as optimal control problems, however, allows addressing this inverse learning problem in the framework of reinforcement learning. However, few of the known optimization or reinforcement learning algorithms can be used in online learning control for robots, as they are either prohibitively slow, do not scale to interesting domains of complex robots, or require trying out policies generated by random search, which are infeasible for a physical system. Using a generalization of the EM-based reinforcement learning framework suggested by Dayan & Hinton, we reduce the problem of learning with immediate rewards to a reward-weighted regression problem with an adaptive, integrated reward transformation for faster convergence. The resulting algorithm is efficient, learns smoothly without dangerous jumps in solution space, and works well in applications of complex high degree-of-freedom robots.

link (url) DOI [BibTex]

Using reward-weighted regression for reinforcement learning of task space control

Peters, J., Schaal, S.

In Proceedings of the 2007 IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning, pages: 262-267, Honolulu, Hawaii, April 1-5, 2007, 2007, clmc (inproceedings)

Abstract
In this paper, we evaluate different versions from the three main kinds of model-free policy gradient methods, i.e., finite difference gradients, `vanilla' policy gradients and natural policy gradients. Each of these methods is first presented in its simple form and subsequently refined and optimized. By carrying out numerous experiments on the cart pole regulator benchmark we aim to provide a useful baseline for future research on parameterized policy search algorithms. Portable C++ code is provided for both plant and algorithms; thus, the results in this paper can be reevaluated, reused and new algorithms can be inserted with ease.

link (url) DOI [BibTex]

Evaluation of Policy Gradient Methods and Variants on the Cart-Pole Benchmark

Riedmiller, M., Peters, J., Schaal, S.

In Proceedings of the 2007 IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning, pages: 254-261, ADPRL, 2007, clmc (inproceedings)

Abstract
In this paper, we evaluate different versions from the three main kinds of model-free policy gradient methods, i.e., finite difference gradients, `vanilla' policy gradients and natural policy gradients. Each of these methods is first presented in its simple form and subsequently refined and optimized. By carrying out numerous experiments on the cart pole regulator benchmark we aim to provide a useful baseline for future research on parameterized policy search algorithms. Portable C++ code is provided for both plant and algorithms; thus, the results in this paper can be reevaluated, reused and new algorithms can be inserted with ease.

PDF [BibTex]


2003


Molecular phenotyping of human chondrocyte cell lines T/C-28a2, T/C-28a4, and C-28/I2

Finger, F., Schorle, C., Zien, A., Gebhard, P., Goldring, M., Aigner, T.

Arthritis & Rheumatism, 48(12):3395-3403, December 2003 (article)

[BibTex]

A Study on Rainfall-Runoff Models for Improving Ensemble Streamflow Prediction: 1. Rainfall-Runoff Models Using Artificial Neural Networks

Jeong, D., Kim, Y., Cho, S., Shin, H.

Journal of the Korean Society of Civil Engineers, 23(6B):521-530, December 2003 (article)

Abstract
The previous ESP (Ensemble Streamflow Prediction) studies conducted in Korea reported that the modeling error is a major source of the ESP forecast error in winter and spring (i.e. dry seasons), and thus suggested that improving the rainfall-runoff model would be critical to obtain more accurate probabilistic forecasts with ESP. This study used two types of Artificial Neural Networks (ANN), a Single Neural Network (SNN) and an Ensemble Neural Network (ENN), to improve the simulation capability of the rainfall-runoff model of the ESP forecasting system for the monthly inflow to the Daecheong dam. Applied for the first time to Korean hydrology, ENN combines the outputs of member models so that it can control the generalization error better than SNN. Because the dry and the flood season in Korea show considerably different streamflow characteristics, this study calibrated the rainfall-runoff model separately for each season. Therefore, four rainfall-runoff models were developed according to the ANN types and the seasons. This study compared the ANN models with a conceptual rainfall-runoff model called TANK and verified that the ANN models were superior to TANK. Among the ANN models, ENN was more accurate than SNN. The ANN model performance was improved when the model was calibrated separately for the dry and the flood season. The best ANN model developed in this article will be incorporated into the ESP system to increase the forecast capability of ESP for the monthly inflow to the Daecheong dam.

[BibTex]

Quantitative Cerebral Blood Flow Measurements in the Rat Using a Beta-Probe and H215O

Weber, B., Spaeth, N., Wyss, M., Wild, D., Burger, C., Stanley, R., Buck, A.

Journal of Cerebral Blood Flow and Metabolism, 23(12):1455-1460, December 2003 (article)

Abstract
Beta-probes are a relatively new tool for tracer kinetic studies in animals. They are highly suited to evaluate new positron emission tomography tracers or measure physiologic parameters at rest and after some kind of stimulation or intervention. In many of these experiments, the knowledge of CBF is highly important. Thus, the purpose of this study was to evaluate the method of CBF measurements using a beta-probe and H215O. CBF was measured in the barrel cortex of eight rats at baseline and after acetazolamide challenge. Trigeminal nerve stimulation was additionally performed in five animals. In each category, three injections of 250 to 300 MBq H215O were performed at 10-minute intervals. Data were analyzed using a standard one-tissue compartment model (K1 = CBF, k2 = CBF/p, where p is the partition coefficient). Values for K1 were 0.35 ± 0.09, 0.58 ± 0.16, and 0.49 ± 0.03 mL·min^-1·mL^-1 at rest, after acetazolamide challenge, and during trigeminal nerve stimulation, respectively. The corresponding values for k2 were 0.55 ± 0.12, 0.94 ± 0.16, and 0.85 ± 0.12 min^-1, and for p were 0.64 ± 0.05, 0.61 ± 0.07, and 0.59 ± 0.06. The standard deviation of the difference between two successive experiments, a measure for the reproducibility of the method, was 10.1%, 13.0%, and 5.7% for K1, k2, and p, respectively. In summary, beta-probes in conjunction with H215O allow the reproducible quantitative measurement of CBF, although some systematic underestimation seems to occur, probably because of partial volume effects.
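For readers unfamiliar with the operational form of the one-tissue compartment model quoted above (K1 = CBF, k2 = CBF/p), the minimal Python sketch below fits K1 and k2 to a tissue time-activity curve via the convolution C_t(t) = K1 * (C_a convolved with exp(-k2 t)). The arterial input function, time grid, and noise level are made-up illustrative values, not data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def tissue_curve(t, K1, k2, Ca):
    # One-tissue compartment model: C_t(t) = K1 * (Ca convolved with exp(-k2 t))
    dt = t[1] - t[0]
    return K1 * np.convolve(Ca, np.exp(-k2 * t))[: len(t)] * dt

t = np.linspace(0, 10, 200)                       # minutes (illustrative)
Ca = np.exp(-((t - 0.5) ** 2) / 0.05)             # made-up arterial input bolus
noisy = tissue_curve(t, 0.35, 0.55, Ca) + 0.01 * np.random.randn(len(t))

popt, _ = curve_fit(lambda tt, K1, k2: tissue_curve(tt, K1, k2, Ca),
                    t, noisy, p0=[0.3, 0.5])
K1_hat, k2_hat = popt
print(K1_hat, k2_hat, K1_hat / k2_hat)            # CBF, CBF/p, and p = K1/k2
```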

PDF DOI [BibTex]

How to Deal with Large Dataset, Class Imbalance and Binary Output in SVM based Response Model

Shin, H., Cho, S.

In Proc. of the Korean Data Mining Conference, pages: 93-107, Korean Data Mining Conference, December 2003, Best Paper Award (inproceedings)

Abstract
Various machine learning methods have made a rapid transition to response modeling in search of improved performance, and support vector machines (SVM) have also been attracting much attention lately. This paper presents an SVM response model. We focus specifically on how to circumvent practical obstacles, such as how to deal with the class imbalance problem, how to produce scores from an SVM classifier for lift chart analysis, and how to evaluate the models on accuracy and profit. Besides coping with the intractability of SVM training caused by large marketing datasets, a previously proposed pattern selection algorithm is introduced. SVM training has a time complexity cubic in the training set size. The pattern selection algorithm picks up important training patterns before SVM response modeling. We compared SVM training results between the pattern selection algorithm and random sampling. Three aspects of SVM response models were evaluated: accuracy, lift chart analysis, and computational efficiency. The SVM trained with selected patterns showed high accuracy, a high uplift in profit and in response rate, and high computational efficiency.
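The step of turning SVM outputs into scores for lift chart analysis can be sketched as follows; scikit-learn's SVC decision values are used as ranking scores, and the decile split and class_weight setting are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np
from sklearn.svm import SVC

def lift_chart(scores, y_true, n_bins=10):
    # Rank customers by SVM score, split into deciles, and report each decile's
    # response rate relative to the overall response rate (lift).
    order = np.argsort(-scores)
    y = np.asarray(y_true)[order]
    base_rate = y.mean()
    return [chunk.mean() / base_rate for chunk in np.array_split(y, n_bins)]

# hypothetical usage on a held-out set:
# svm = SVC(kernel="rbf", class_weight="balanced").fit(X_train, y_train)
# print(lift_chart(svm.decision_function(X_test), y_test))
```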

PDF [BibTex]

Blind separation of post-nonlinear mixtures using linearizing transformations and temporal decorrelation

Ziehe, A., Kawanabe, M., Harmeling, S., Müller, K.

Journal of Machine Learning Research, 4(7-8):1319-1338, November 2003 (article)

Abstract
We propose two methods that reduce the post-nonlinear blind source separation problem (PNL-BSS) to a linear BSS problem. The first method is based on the concept of maximal correlation: we apply the alternating conditional expectation (ACE) algorithm--a powerful technique from non-parametric statistics--to approximately invert the componentwise nonlinear functions. The second method is a Gaussianizing transformation, which is motivated by the fact that linearly mixed signals before nonlinear transformation are approximately Gaussian distributed. This heuristic, but simple and efficient procedure works as well as the ACE method. Using the framework provided by ACE, convergence can be proven. The optimal transformations obtained by ACE coincide with the sought-after inverse functions of the nonlinearities. After equalizing the nonlinearities, temporal decorrelation separation (TDSEP) allows us to recover the source signals. Numerical simulations testing "ACE-TD" and "Gauss-TD" on realistic examples are performed with excellent results.
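A minimal sketch of the Gaussianizing idea (the "Gauss" step of Gauss-TD) is given below; the rank-based implementation and the toy tanh mixture are assumptions made for illustration, and the subsequent temporal decorrelation step (TDSEP) is not reproduced here.

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussianize(x):
    # Componentwise rank-based Gaussianizing transform: maps the empirical
    # distribution of x onto a standard normal, approximately inverting an
    # unknown invertible post-nonlinearity.
    u = (rankdata(x) - 0.5) / len(x)
    return norm.ppf(u)

# toy post-nonlinear mixture: linear mix of two sources, then a tanh distortion
rng = np.random.default_rng(0)
s = np.vstack([np.sin(0.1 * np.arange(2000)), rng.laplace(size=2000)])
x = np.tanh(np.array([[1.0, 0.6], [0.4, 1.0]]) @ s)      # observed signals

z = np.vstack([gaussianize(xi) for xi in x])   # approximately linearized mixtures
# z would now be passed to a linear BSS method such as TDSEP (not shown).
```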

PDF PDF DOI [BibTex]

Correlated stage- and subfield-associated hippocampal gene expression patterns in experimental and human temporal lobe epilepsy

Becker, A., Chen, J., Zien, A., Sochivko, D., Normann, S., Schramm, J., Elger, C., Wiestler, O., Blumcke, I.

European Journal of Neuroscience, 18(10):2792-2802, November 2003 (article)

Abstract
Epileptic activity evokes profound alterations of hippocampal organization and function. Genomic responses may reflect immediate consequences of excitatory stimulation as well as sustained molecular processes related to neuronal plasticity and structural remodeling. Using oligonucleotide microarrays with 8799 sequences, we determined subregional gene expression profiles in rats subjected to pilocarpine-induced epilepsy (U34A arrays, Affymetrix, Santa Clara, CA, USA; P < 0.05, twofold change, n = 3 per stage). Patterns of gene expression corresponded to distinct stages of epilepsy development. The highest number of differentially expressed genes (dentate gyrus, approx. 400 genes and CA1, approx. 700 genes) was observed 3 days after status epilepticus. The majority of up-regulated genes was associated with mechanisms of cellular stress and injury - 14 days after status epilepticus, numerous transcription factors and genes linked to cytoskeletal and synaptic reorganization were differentially expressed and, in the stage of chronic spontaneous seizures, distinct changes were observed in the transcription of genes involved in various neurotransmission pathways and between animals with low vs. high seizure frequency. A number of genes (n = 18) differentially expressed during the chronic epileptic stage showed corresponding expression patterns in hippocampal subfields of patients with pharmacoresistant temporal lobe epilepsy (n = 5 temporal lobe epilepsy patients; U133A microarrays, Affymetrix; covering 22284 human sequences). These data provide novel insights into the molecular mechanisms of epileptogenesis and seizure-associated cellular and structural remodeling of the hippocampus.

[BibTex]

Concentration Inequalities for Sub-Additive Functions Using the Entropy Method

Bousquet, O.

Stochastic Inequalities and Applications, 56, pages: 213-247, Progress in Probability, (Editors: Giné, E., C. Houdré and D. Nualart), November 2003 (article)

Abstract
We obtain exponential concentration inequalities for sub-additive functions of independent random variables under weak conditions on the increments of those functions, like the existence of exponential moments for these increments. As a consequence of these general inequalities, we obtain refinements of Talagrand's inequality for empirical processes and new bounds for randomized empirical processes. These results are obtained by further developing the entropy method introduced by Ledoux.

PostScript [BibTex]

Bayesian Monte Carlo

Rasmussen, CE., Ghahramani, Z.

In Advances in Neural Information Processing Systems 15, pages: 489-496, (Editors: Becker, S. , S. Thrun, K. Obermayer), MIT Press, Cambridge, MA, USA, Sixteenth Annual Conference on Neural Information Processing Systems (NIPS), October 2003 (inproceedings)

Abstract
We investigate Bayesian alternatives to classical Monte Carlo methods for evaluating integrals. Bayesian Monte Carlo (BMC) allows the incorporation of prior knowledge, such as smoothness of the integrand, into the estimation. In a simple problem we show that this outperforms any classical importance sampling method. We also attempt more challenging multidimensional integrals involved in computing marginal likelihoods of statistical models (a.k.a. partition functions and model evidences). We find that Bayesian Monte Carlo outperformed Annealed Importance Sampling, although for very high dimensional problems or problems with massive multimodality BMC may be less adequate. One advantage of the Bayesian approach to Monte Carlo is that samples can be drawn from any distribution. This allows for the possibility of active design of sample points so as to maximise information gain.
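A one-dimensional sketch of the Bayesian Monte Carlo estimate is given below, assuming a zero-mean GP prior with a squared-exponential kernel and a Gaussian input density, for which the required kernel integrals have a closed form; the fixed hyperparameters are an illustrative assumption (the paper infers them rather than fixing them by hand).

```python
import numpy as np

def bmc_1d(f, x, b=0.0, B=1.0, ell=1.0, w0=1.0, jitter=1e-8):
    # Posterior mean of E_p[f] for p = N(b, B) under a zero-mean GP prior on f
    # with kernel k(x, x') = w0 * exp(-(x - x')^2 / (2 ell^2)): estimate = z K^{-1} f(x),
    # where z_i = integral of k(x, x_i) p(x) dx has a Gaussian closed form.
    K = w0 * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell ** 2)
    z = w0 * np.sqrt(ell ** 2 / (ell ** 2 + B)) * np.exp(-0.5 * (x - b) ** 2 / (ell ** 2 + B))
    return z @ np.linalg.solve(K + jitter * np.eye(len(x)), f(x))

# sanity check: E[x^2] = 1 for p = N(0, 1)
x = np.linspace(-3, 3, 25)
print(bmc_1d(lambda t: t ** 2, x))
```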

PDF Web [BibTex]

On the Complexity of Learning the Kernel Matrix

Bousquet, O., Herrmann, D.

In Advances in Neural Information Processing Systems 15, pages: 399-406, (Editors: Becker, S. , S. Thrun, K. Obermayer), The MIT Press, Cambridge, MA, USA, Sixteenth Annual Conference on Neural Information Processing Systems (NIPS), October 2003 (inproceedings)

Abstract
We investigate data based procedures for selecting the kernel when learning with Support Vector Machines. We provide generalization error bounds by estimating the Rademacher complexities of the corresponding function classes. In particular we obtain a complexity bound for function classes induced by kernels with given eigenvectors, i.e., we allow the spectrum to vary while keeping the eigenvectors fixed. This bound is only a logarithmic factor bigger than the complexity of the function class induced by a single kernel. However, optimizing the margin over such classes leads to overfitting. We thus propose a suitable way of constraining the class. We use an efficient algorithm to solve the resulting optimization problem, present preliminary experimental results, and compare them to an alignment-based approach.

PDF Web [BibTex]

Control, Planning, Learning, and Imitation with Dynamic Movement Primitives

Schaal, S., Peters, J., Nakanishi, J., Ijspeert, A.

In IROS 2003, pages: 1-21, Workshop on Bilateral Paradigms on Humans and Humanoids, IEEE International Conference on Intelligent Robots and Systems, October 2003 (inproceedings)

PDF [BibTex]

Discriminative Learning for Label Sequences via Boosting

Altun, Y., Hofmann, T., Johnson, M.

In Advances in Neural Information Processing Systems 15, pages: 977-984, (Editors: Becker, S. , S. Thrun, K. Obermayer ), MIT Press, Cambridge, MA, USA, Sixteenth Annual Conference on Neural Information Processing Systems (NIPS), October 2003 (inproceedings)

Abstract
This paper investigates a boosting approach to discriminative learning of label sequences based on a sequence rank loss function.

PDF Web [BibTex]

Multiple-step ahead prediction for non linear dynamic systems: A Gaussian Process treatment with propagation of the uncertainty

Girard, A., Rasmussen, CE., Quiñonero-Candela, J., Murray-Smith, R.

In Advances in Neural Information Processing Systems 15, pages: 529-536, (Editors: Becker, S. , S. Thrun, K. Obermayer), MIT Press, Cambridge, MA, USA, Sixteenth Annual Conference on Neural Information Processing Systems (NIPS), October 2003 (inproceedings)

Abstract
We consider the problem of multi-step ahead prediction in time series analysis using the non-parametric Gaussian process model. k-step ahead forecasting of a discrete-time non-linear dynamic system can be performed by doing repeated one-step ahead predictions. For a state-space model of the form y_t = f(y_{t-1},...,y_{t-L}), the prediction of y at time t + k is based on the point estimates of the previous outputs. In this paper, we show how, using an analytical Gaussian approximation, we can formally incorporate the uncertainty about intermediate regressor values, thus updating the uncertainty on the current prediction.
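The naive baseline that this paper improves on, repeated one-step-ahead prediction feeding back only point estimates, can be sketched as follows; the lag order, kernel, and toy series are illustrative assumptions, and the paper's analytical propagation of the predictive uncertainty is not shown.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

L, k = 3, 10                                            # lag order, forecast horizon
y = np.sin(0.3 * np.arange(200)) + 0.05 * np.random.randn(200)   # toy time series

# lag embedding: y_t = f(y_{t-L}, ..., y_{t-1})
X = np.column_stack([y[i:len(y) - L + i] for i in range(L)])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2).fit(X, y[L:])

window, forecast = list(y[-L:]), []
for _ in range(k):
    nxt = gp.predict(np.array(window[-L:])[None, :])[0]
    forecast.append(nxt)
    window.append(nxt)           # point estimate fed back in, uncertainty ignored
print(forecast)
```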

PDF Web [BibTex]

Cluster Kernels for Semi-Supervised Learning

Chapelle, O., Weston, J., Schölkopf, B.

In Advances in Neural Information Processing Systems 15, pages: 585-592, (Editors: S Becker and S Thrun and K Obermayer), MIT Press, Cambridge, MA, USA, 16th Annual Conference on Neural Information Processing Systems (NIPS), October 2003 (inproceedings)

Abstract
We propose a framework to incorporate unlabeled data in a kernel classifier, based on the idea that two points in the same cluster are more likely to have the same label. This is achieved by modifying the eigenspectrum of the kernel matrix. Experimental results assess the validity of this approach.
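A simplified sketch of the eigenspectrum modification is shown below: the kernel matrix is eigendecomposed and rebuilt with a transfer function applied to its eigenvalues. Working directly on the kernel matrix and the particular polynomial transfer function are simplifying assumptions; the paper discusses several transfer functions and a normalized-affinity construction.

```python
import numpy as np

def cluster_kernel(K, transfer=lambda lam: lam ** 2):
    # Rebuild a symmetric kernel matrix K = U diag(lam) U^T with transformed
    # eigenvalues, boosting directions associated with cluster structure.
    lam, U = np.linalg.eigh(K)
    return (U * transfer(lam)) @ U.T

# usage sketch: K_new = cluster_kernel(K_rbf); the labeled rows/columns of
# K_new are then used to train a kernel classifier such as an SVM.
```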

PDF Web [BibTex]

Mismatch String Kernels for SVM Protein Classification

Leslie, C., Eskin, E., Weston, J., Noble, W.

In Advances in Neural Information Processing Systems 15, pages: 1417-1424, (Editors: Becker, S. , S. Thrun, K. Obermayer), MIT Press, Cambridge, MA, USA, Sixteenth Annual Conference on Neural Information Processing Systems (NIPS), October 2003 (inproceedings)

Abstract
We introduce a class of string kernels, called mismatch kernels, for use with support vector machines (SVMs) in a discriminative approach to the protein classification problem. These kernels measure sequence similarity based on shared occurrences of k-length subsequences, counted with up to m mismatches, and do not rely on any generative model for the positive training sequences. We compute the kernels efficiently using a mismatch tree data structure and report experiments on a benchmark SCOP dataset, where we show that the mismatch kernel used with an SVM classifier performs as well as the Fisher kernel, the most successful method for remote homology detection, while achieving considerable computational savings.
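To make the definition concrete, here is a deliberately naive Python version of the (k, m)-mismatch kernel built from the explicit feature map; the paper's mismatch tree avoids this exponential enumeration, and the small DNA alphabet is used only to keep the example short (the paper works with protein sequences).

```python
from itertools import product
from collections import Counter

def mismatch_features(seq, k, m, alphabet="ACGT"):
    # For every k-mer beta over the alphabet, count how many k-mers in `seq`
    # lie within m mismatches of beta (the explicit mismatch feature map).
    phi = Counter()
    for i in range(len(seq) - k + 1):
        alpha = seq[i:i + k]
        for beta in product(alphabet, repeat=k):
            if sum(a != b for a, b in zip(alpha, beta)) <= m:
                phi["".join(beta)] += 1
    return phi

def mismatch_kernel(s, t, k=3, m=1):
    ps, pt = mismatch_features(s, k, m), mismatch_features(t, k, m)
    return sum(ps[b] * pt[b] for b in ps if b in pt)

print(mismatch_kernel("ACGTACGT", "ACGTTGCA"))   # toy DNA strings
```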

PDF Web [BibTex]

YKL-39 (chitinase 3-like protein 2), but not YKL-40 (chitinase 3-like protein 1), is up regulated in osteoarthritic chondrocytes

Knorr, T., Obermayr, F., Bartnik, E., Zien, A., Aigner, T.

Annals of the Rheumatic Diseases, 62(10):995-998, October 2003 (article)

Abstract
OBJECTIVE: To investigate quantitatively the mRNA expression levels of YKL-40, an established marker of rheumatoid and osteoarthritic cartilage degeneration in synovial fluid and serum, and a closely related molecule YKL-39, in articular chondrocytes. METHODS: cDNA array and online quantitative polymerase chain reaction (PCR) were used to measure mRNA expression levels of YKL-39 and YKL-40 in chondrocytes in normal, early degenerative, and late stage osteoarthritic cartilage samples. RESULTS: Expression analysis showed high levels of both proteins in normal articular chondrocytes, with lower levels of YKL-39 than YKL-40. Whereas YKL-40 was significantly down regulated in late stage osteoarthritic chondrocytes, YKL-39 was significantly up regulated. In vitro both YKLs were down regulated by interleukin 1beta. CONCLUSIONS: The up regulation of YKL-39 in osteoarthritic cartilage suggests that YKL-39 may be a more accurate marker of chondrocyte activation than YKL-40, although it has yet to be established as a suitable marker in synovial fluid and serum. The decreased expression of YKL-40 by osteoarthritic chondrocytes is surprising as increased levels have been reported in rheumatoid and osteoarthritic synovial fluid, where it may derive from activated synovial cells or osteophytic tissue or by increased matrix destruction in the osteoarthritic joint. YKL-39 and YKL-40 are potentially interesting marker molecules for arthritic joint disease because they are abundantly expressed by both normal and osteoarthritic chondrocytes.

[BibTex]

Incremental Gaussian Processes

Quinonero Candela, J., Winther, O.

In Advances in Neural Information Processing Systems 15, pages: 1001-1008, (Editors: Becker, S. , S. Thrun, K. Obermayer), MIT Press, Cambridge, MA, USA, Sixteenth Annual Conference on Neural Information Processing Systems (NIPS), October 2003 (inproceedings)

Abstract
In this paper, we consider Tipping's relevance vector machine (RVM) and formalize an incremental training strategy as a variant of the expectation-maximization (EM) algorithm that we call subspace EM. Working with a subset of active basis functions, the sparsity of the RVM solution will ensure that the number of basis functions and thereby the computational complexity is kept low. We also introduce a mean field approach to the intractable classification model that is expected to give a very good approximation to exact Bayesian inference and contains the Laplace approximation as a special case. We test the algorithms on two large data sets with O(10^3-10^4) examples. The results indicate that Bayesian learning of large data sets, e.g. the MNIST database is realistic.

PDF Web [BibTex]

Kernel Dependency Estimation

Weston, J., Chapelle, O., Elisseeff, A., Schölkopf, B., Vapnik, V.

In Advances in Neural Information Processing Systems 15, pages: 873-880, (Editors: S Becker and S Thrun and K Obermayer), MIT Press, Cambridge, MA, USA, 16th Annual Conference on Neural Information Processing Systems (NIPS), October 2003 (inproceedings)

PDF Web [BibTex]

Derivative observations in Gaussian Process models of dynamic systems

Solak, E., Murray-Smith, R., Leithead, WE., Leith, D., Rasmussen, CE.

In Advances in Neural Information Processing Systems 15, pages: 1033-1040, (Editors: Becker, S., S. Thrun and K. Obermayer), MIT Press, Cambridge, MA, USA, Sixteenth Annual Conference on Neural Information Processing Systems (NIPS), October 2003 (inproceedings)

Abstract
Gaussian processes provide an approach to nonparametric modelling which allows a straightforward combination of function and derivative observations in an empirical model. This is of particular importance in identification of nonlinear dynamic systems from experimental data. 1) It allows us to combine derivative information, and associated uncertainty with normal function observations into the learning and inference process. This derivative information can be in the form of priors specified by an expert or identified from perturbation data close to equilibrium. 2) It allows a seamless fusion of multiple local linear models in a consistent manner, inferring consistent models and ensuring that integrability constraints are met. 3) It improves dramatically the computational efficiency of Gaussian process models for dynamic system identification, by summarising large quantities of near-equilibrium data by a handful of linearisations, reducing the training set size - traditionally a problem for Gaussian process models.

PDF Web [BibTex]

Linear Combinations of Optic Flow Vectors for Estimating Self-Motion: a Real-World Test of a Neural Model

Franz, MO., Chahl, JS.

In Advances in Neural Information Processing Systems 15, pages: 1319-1326, (Editors: Becker, S., S. Thrun and K. Obermayer), MIT Press, Cambridge, MA, USA, Sixteenth Annual Conference on Neural Information Processing Systems (NIPS), October 2003 (inproceedings)

Abstract
The tangential neurons in the fly brain are sensitive to the typical optic flow patterns generated during self-motion. In this study, we examine whether a simplified linear model of these neurons can be used to estimate self-motion from the optic flow. We present a theory for the construction of an estimator consisting of a linear combination of optic flow vectors that incorporates prior knowledge both about the distance distribution of the environment, and about the noise and self-motion statistics of the sensor. The estimator is tested on a gantry carrying an omnidirectional vision sensor. The experiments show that the proposed approach leads to accurate and robust estimates of rotation rates, whereas translation estimates turn out to be less reliable.

PDF Web [BibTex]

Clustering with the Fisher score

Tsuda, K., Kawanabe, M., Müller, K.

In Advances in Neural Information Processing Systems 15, pages: 729-736, (Editors: Becker, S. , S. Thrun, K. Obermayer), MIT Press, Cambridge, MA, USA, Sixteenth Annual Conference on Neural Information Processing Systems (NIPS), October 2003 (inproceedings)

Abstract
Recently the Fisher score (or the Fisher kernel) is increasingly used as a feature extractor for classification problems. The Fisher score is a vector of parameter derivatives of the loglikelihood of a probabilistic model. This paper gives a theoretical analysis of how class information is preserved in the space of the Fisher score, showing that the Fisher score consists of a few important dimensions with class information and many nuisance dimensions. When we perform clustering with the Fisher score, K-Means type methods are obviously inappropriate because they make use of all dimensions. So we will develop a novel but simple clustering algorithm specialized for the Fisher score, which can exploit important dimensions. This algorithm is successfully tested in experiments with artificial data and real data (amino acid sequences).
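As a minimal illustration of the object being analyzed (independent of the clustering algorithm the paper develops), the sketch below computes the Fisher score of a single observation under a one-dimensional Gaussian model; the choice of model is an assumption made only for illustration.

```python
import numpy as np

def fisher_score_gaussian(x, mu, sigma):
    # Fisher score = gradient of the log-likelihood w.r.t. the parameters.
    # For log N(x; mu, sigma^2) = -log(sigma) - (x - mu)^2 / (2 sigma^2) + const:
    d_mu = (x - mu) / sigma ** 2
    d_sigma = ((x - mu) ** 2 - sigma ** 2) / sigma ** 3
    return np.array([d_mu, d_sigma])

print(fisher_score_gaussian(1.3, mu=0.0, sigma=1.0))   # [1.3, 0.69]
```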

PDF Web [BibTex]

Large Margin Methods for Label Sequence Learning

Altun, Y., Hofmann, T.

In pages: 993-996, International Speech Communication Association, Bonn, Germany, 8th European Conference on Speech Communication and Technology (EuroSpeech), September 2003 (inproceedings)

Web [BibTex]

Fast Pattern Selection Algorithm for Support Vector Classifiers: "Time Complexity Analysis"

Shin, H., Cho, S.

In Lecture Notes in Computer Science (LNCS 2690), pages: 1008-1015, Springer-Verlag, Heidelberg, 4th International Conference on Intelligent Data Engineering and Automated Learning (IDEAL), September 2003 (inproceedings)

Abstract
Training SVM requires large memory and long CPU time when the pattern set is large. To alleviate the computational burden in SVM training, we propose a fast preprocessing algorithm which selects only the patterns near the decision boundary. The time complexity of the proposed algorithm is much smaller than that of the naive M^2 algorithm.

PDF [BibTex]

Marginalized Kernels between Labeled Graphs

Kashima, H., Tsuda, K., Inokuchi, A.

In 20th International Conference on Machine Learning, pages: 321-328, (Editors: Fawcett, T. and N. Mishra), 20th International Conference on Machine Learning, August 2003 (inproceedings)

PDF [BibTex]

Sparse Gaussian Processes: inference, subspace identification and model selection

Csato, L., Opper, M.

In Proceedings, pages: 1-6, (Editors: Van der Hof and Wahlberg), The Netherlands, 13th IFAC Symposium on System Identification, August 2003, electronic version; Index ThA02-2 (inproceedings)

Abstract
Gaussian Process (GP) inference is a probabilistic kernel method where the GP is treated as a latent function. The inference is carried out using Bayesian online learning and its extension to the more general iterative approach which we call TAP/EP learning. Sparsity is introduced in this context to make the TAP/EP method applicable to large datasets. We address the prohibitive scaling of the number of parameters by defining a subset of the training data that is used as the support of the GP, thus the number of required parameters is independent of the training set size, similar to the case of "Support-" or "Relevance-Vectors". An advantage of the full probabilistic treatment is that it allows the computation of the marginal data likelihood or evidence, leading to hyper-parameter estimation within the GP inference. An EM algorithm to choose the hyper-parameters is proposed. The TAP/EP learning is the E-step, and the M-step then updates the hyper-parameters. Due to the sparse E-step the resulting algorithm does not involve manipulation of large matrices. The presented algorithm is applicable to a wide variety of likelihood functions. We present results of applying the algorithm on classification and nonstandard regression problems for artificial and real datasets.

PDF GZIP [BibTex]

Adaptive, Cautious, Predictive control with Gaussian Process Priors

Murray-Smith, R., Sbarbaro, D., Rasmussen, CE., Girard, A.

In Proceedings of the 13th IFAC Symposium on System Identification, pages: 1195-1200, (Editors: Van den Hof, P., B. Wahlberg and S. Weiland), Proceedings of the 13th IFAC Symposium on System Identification, August 2003 (inproceedings)

Abstract
Nonparametric Gaussian Process models, a Bayesian statistics approach, are used to implement a nonlinear adaptive control law. Predictions, including propagation of the state uncertainty are made over a k-step horizon. The expected value of a quadratic cost function is minimised, over this prediction horizon, without ignoring the variance of the model predictions. The general method and its main features are illustrated on a simulation example.

PDF [BibTex]

Generative Model-based Clustering of Directional Data

Banerjee, A., Dhillon, I., Ghosh, J., Sra, S.

In Proc. ACM SIGKDD, pages: 00-00, KDD, August 2003 (inproceedings)

GZIP [BibTex]

Hidden Markov Support Vector Machines

Altun, Y., Tsochantaridis, I., Hofmann, T.

In pages: 4-11, (Editors: Fawcett, T. , N. Mishra), AAAI Press, Menlo Park, CA, USA, Twentieth International Conference on Machine Learning (ICML), August 2003 (inproceedings)

Web [BibTex]

How Many Neighbors To Consider in Pattern Pre-selection for Support Vector Classifiers?

Shin, H., Cho, S.

In Proc. of INNS-IEEE International Joint Conference on Neural Networks (IJCNN 2003), pages: 565-570, IJCNN, July 2003 (inproceedings)

Abstract
Training support vector classifiers (SVC) requires large memory and long CPU time when the pattern set is large. To alleviate the computational burden in SVC training, we previously proposed a preprocessing algorithm which selects only the patterns in the overlap region around the decision boundary, based on neighborhood properties [8], [9], [10]. The k-nearest neighbors' class label entropy for each pattern was used to estimate the pattern's proximity to the decision boundary. The value of parameter k is critical, yet has been determined in a rather ad-hoc fashion. We propose in this paper a systematic procedure to determine k and show its effectiveness through experiments.
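A minimal sketch of the neighborhood-entropy criterion described above follows; scikit-learn's NearestNeighbors is used for the k-NN search, and the selection threshold is an illustrative assumption (the paper's contribution, a systematic procedure for choosing k, is not reproduced here).

```python
import numpy as np
from collections import Counter
from sklearn.neighbors import NearestNeighbors

def knn_label_entropy(X, y, k=5):
    # Entropy of the class labels among each pattern's k nearest neighbours;
    # high entropy flags patterns in the overlap region near the decision boundary.
    y = np.asarray(y)
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    ent = []
    for row in idx[:, 1:]:                       # drop the pattern itself
        p = np.array(list(Counter(y[row]).values()), dtype=float)
        p /= p.sum()
        ent.append(-(p * np.log2(p)).sum())
    return np.array(ent)

# keep only high-entropy (boundary) patterns before SVM training, e.g.:
# ent = knn_label_entropy(X, y, k=5); X_sel, y_sel = X[ent > 0], y[ent > 0]
```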

PDF [BibTex]

On the Representation, Learning and Transfer of Spatio-Temporal Movement Characteristics

Ilg, W., Bakir, GH., Mezger, J., Giese, MA.

In Humanoids Proceedings, pages: 0-0, Humanoids Proceedings, July 2003, electronic version (inproceedings)

Abstract
In this paper we present a learning-based approach for the modelling of complex movement sequences. Based on the method of Spatio-Temporal Morphable Models (STMMs), we derive a hierarchical algorithm that, in a first step, identifies automatically movement elements in movement sequences based on a coarse spatio-temporal description, and in a second step models these movement primitives by approximation through linear combinations of learned example movement trajectories. We describe the different steps of the algorithm and show how it can be applied for modelling and synthesis of complex sequences of human movements that contain movement elements with variable style. The proposed method is demonstrated on different applications of movement representation relevant for imitation learning of movement styles in humanoid robotics.

PDF [BibTex]
