

2012


Existential neuroscience: a functional magnetic resonance imaging investigation of neural responses to reminders of one’s mortality

Quirin, M., Loktyushin, A., Arndt, J., Küstermann, E., Lo, Y., Kuhl, J., Eggert, L.

Social Cognitive and Affective Neuroscience, 7(2):193-198, 2012 (article)

Web DOI [BibTex]

Active learning for domain adaptation in the supervised classification of remote sensing images

Persello, C., Bruzzone, L.

IEEE Transactions on Geoscience and Remote Sensing, 50(11):4468-4483, 2012 (article)

DOI [BibTex]

Reinforcement learning to adjust parametrized motor primitives to new situations

Kober, J., Wilhelm, A., Oztop, E., Peters, J.

Autonomous Robots, 33(4):361-379, 2012 (article)

Abstract
Humans manage to adapt learned movements very quickly to new situations by generalizing learned behaviors from similar situations. In contrast, robots currently often need to re-learn the complete movement. In this paper, we propose a method that learns to generalize parametrized motor plans by adapting a small set of global parameters, called meta-parameters. We employ reinforcement learning to learn the required meta-parameters to deal with the current situation, described by states. We introduce an appropriate reinforcement learning algorithm based on a kernelized version of the reward-weighted regression. To show its feasibility, we evaluate this algorithm on a toy example and compare it to several previous approaches. Subsequently, we apply the approach to three robot tasks, i.e., the generalization of throwing movements in darts, of hitting movements in table tennis, and of throwing balls where the tasks are learned on several different real physical robots, i.e., a Barrett WAM, a BioRob, the JST-ICORP/SARCOS CBi and a Kuka KR 6.
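As a rough illustration of the reward-weighted regression idea described above, the following sketch predicts a meta-parameter for a new situation by weighting past rollouts both by state similarity and by obtained reward. This is a simplified stand-in, not the paper's exact kernelized algorithm; all names, shapes, and the soft reward weighting are assumptions for illustration.

```python
# Simplified sketch of reward-weighted, kernel-smoothed meta-parameter prediction.
# Illustrative only; not the paper's cost-regularized formulation.
import numpy as np

def predict_meta_parameter(states, metas, rewards, query, bandwidth=1.0):
    """Predict a meta-parameter for the query state by weighting past samples
    by a Gaussian state-similarity kernel and by the reward they achieved."""
    states = np.asarray(states, dtype=float)    # shape (n, d): past situations
    metas = np.asarray(metas, dtype=float)      # shape (n,): past meta-parameters
    rewards = np.asarray(rewards, dtype=float)  # shape (n,): rewards obtained
    d2 = np.sum((states - query) ** 2, axis=1)
    k = np.exp(-d2 / (2.0 * bandwidth ** 2))    # state-similarity weights
    w = k * np.exp(rewards - rewards.max())     # reward weighting (soft-max style)
    return float(np.dot(w, metas) / (w.sum() + 1e-12))

# Hypothetical example: past throws with launch angles and rewards
states = [[0.2], [0.5], [0.8]]
angles = [0.30, 0.45, 0.60]
rewards = [-1.0, 0.5, -0.5]
print(predict_meta_parameter(states, angles, rewards, np.array([0.55])))
```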

PDF PDF DOI [BibTex]


On the Empirical Estimation of Integral Probability Metrics

Sriperumbudur, B., Fukumizu, K., Gretton, A., Schölkopf, B., Lanckriet, G.

Electronic Journal of Statistics, 6, pages: 1550-1599, 2012 (article)

Web DOI [BibTex]

Effect of MR contrast agents on quantitative accuracy of PET in combined whole-body PET/MR imaging

Lois, C., Bezrukov, I., Schmidt, H., Schwenzer, N., Werner, M., Kupferschläger, J., Beyer, T.

European Journal of Nuclear Medicine and Molecular Imaging, 39(11):1756-1766, 2012 (article)

DOI [BibTex]

Multitask Learning in Computational Biology

Widmer, C., Rätsch, G.

JMLR W&CP. ICML 2011 Unsupervised and Transfer Learning Workshop, 27, pages: 207-216, 2012 (article)

Abstract
Computational Biology provides a wide range of applications for Multitask Learning (MTL) methods. As the generation of labels often is very costly in the biomedical domain, combining data from different related problems or tasks is a promising strategy to reduce label cost. In this paper, we present two problems from sequence biology, where MTL was successfully applied. For this, we use regularization-based MTL methods, with a special focus on the case of a hierarchical relationship between tasks. Furthermore, we propose strategies to refine the measure of task relatedness, which is of central importance in MTL and finally give some practical guidelines, when MTL strategies are likely to pay off.
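For orientation, regularization-based MTL with a hierarchical task structure, as mentioned in the abstract, is commonly written with a penalty that pulls each task's weight vector toward that of its parent task. The following is a generic form of such an objective, not necessarily the paper's exact formulation:

$$\min_{\{w_t\}} \; \sum_{t} \sum_{i=1}^{n_t} \ell\!\left(w_t^{\top} x_i^{(t)},\, y_i^{(t)}\right) \;+\; \lambda \sum_{t} \left\lVert w_t - w_{\pi(t)} \right\rVert^2 ,$$

where $\pi(t)$ denotes the parent of task $t$ in the hierarchy (the root being regularized toward zero), so that closely related tasks share information through similar weight vectors.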

PDF [BibTex]

Arabidopsis defense against Botrytis cinerea: chronology and regulation deciphered by high-resolution temporal transcriptomic analysis

Windram, O., Madhou, P., McHattie, S., Hill, C., Hickman, R., Cooke, E., Jenkins, DJ., Penfold, CA., Baxter, L., Breeze, E., Kiddle, SJ., Rhodes, J., Atwell, S., Kliebenstein, D., Kim, Y-S., Stegle, O., Borgwardt, KM., others

The Plant Cell Online, 24(9):3530-3557, 2012 (article)

Web DOI [BibTex]

Improved Linear Mixed Models for Genome-Wide Association Studies

Listgarten, J., Lippert, C., Kadie, CM., Davidson, RI., Eskin, E., Heckerman, D.

Nature Methods, 9, pages: 525–526, 2012 (article)

DOI [BibTex]

Calibration of Boltzmann distribution priors in Bayesian data analysis

Mechelke, M., Habeck, M.

Physical Review E, 86(6):066705, 2012 (article)

DOI [BibTex]

CSB: A Python framework for computational structural biology

Kalev, I., Mechelke, M., Kopec, K., Holder, T., Carstens, S., Habeck, M.

Bioinformatics, 28(22):2996-2997, 2012 (article)

Abstract
Summary: Computational Structural Biology Toolbox (CSB) is a cross-platform Python class library for reading, storing and analyzing biomolecular structures with rich support for statistical analyses. CSB is designed for reusability and extensibility and comes with a clean, well-documented API following good object-oriented engineering practice. Availability: Stable release packages are available for download from the Python Package Index (PyPI), as well as from the project’s web site http://csb.codeplex.com.

Web DOI [BibTex]

Significant global reduction of carbon uptake by water-cycle driven extreme vegetation anomalies

Zscheischler, J., Mahecha, M., von Buttlar, J., Harmeling, S., Jung, M., Randerson, J., Reichstein, M.

Nature Geoscience, 2012 (article) In revision

[BibTex]

Design of a Haptic Interface for a Gastrointestinal Endoscopy Simulation

Yu, S., Woo, H. S., Son, H. I., Ahn, W., Jung, H., Lee, D. Y., Yi, S. Y.

Advanced Robotics, 26(18):2115-2143, 2012 (article)

DOI [BibTex]

Measurement and calibration of noise bias in weak lensing galaxy shape estimation

Kacprzak, T., Zuntz, J., Rowe, B., Bridle, S., Refregier, A., Amara, A., Voigt, L., Hirsch, M.

Monthly Notices of the Royal Astronomical Society, 427(4):2711-2722, Oxford University Press, 2012 (article)

DOI [BibTex]

Image analysis for cosmology: results from the GREAT10 Galaxy Challenge

Kitching, T. D., Balan, S. T., Bridle, S., Cantale, N., Courbin, F., Eifler, T., Gentile, M., Gill, M. S. S., Harmeling, S., Heymans, C., others,

Monthly Notices of the Royal Astronomical Society, 423(4):3163-3208, Oxford University Press, 2012 (article)

DOI [BibTex]

First SN Discoveries from the Dark Energy Survey

Abbott, T., Abdalla, F., Achitouv, I., Ahn, E., Aldering, G., Allam, S., Alonso, D., Amara, A., Annis, J., Antonik, M., others,

The Astronomer's Telegram, 4668, pages: 1, 2012 (article)

[BibTex]

A sensorimotor paradigm for Bayesian model selection

Genewein, T., Braun, D. A.

Frontiers in Human Neuroscience, 6(291):1-16, October 2012 (article)

Abstract
Sensorimotor control is thought to rely on predictive internal models in order to cope efficiently with uncertain environments. Recently, it has been shown that humans not only learn different internal models for different tasks, but that they also extract common structure between tasks. This raises the question of how the motor system selects between different structures or models, when each model can be associated with a range of different task-specific parameters. Here we design a sensorimotor task that requires subjects to compensate visuomotor shifts in a three-dimensional virtual reality setup, where one of the dimensions can be mapped to a model variable and the other dimension to the parameter variable. By introducing probe trials that are neutral in the parameter dimension, we can directly test for model selection. We found that model selection procedures based on Bayesian statistics provided a better explanation for subjects’ choice behavior than simple non-probabilistic heuristics. Our experimental design lends itself to the general study of model selection in a sensorimotor context, as it allows model and parameter variables to be queried separately from subjects.

DOI [BibTex]

Risk-Sensitivity in Bayesian Sensorimotor Integration

Grau-Moya, J., Ortega, P. A., Braun, D. A.

PLoS Computational Biology, 8(9):1-7, September 2012 (article)

Abstract
Information processing in the nervous system during sensorimotor tasks with inherent uncertainty has been shown to be consistent with Bayesian integration. Bayes optimal decision-makers are, however, risk-neutral in the sense that they weigh all possibilities based on prior expectation and sensory evidence when they choose the action with highest expected value. In contrast, risk-sensitive decision-makers are sensitive to model uncertainty and bias their decision-making processes when they do inference over unobserved variables. In particular, they allow deviations from their probabilistic model in cases where this model makes imprecise predictions. Here we test for risk-sensitivity in a sensorimotor integration task where subjects exhibit Bayesian information integration when they infer the position of a target from noisy sensory feedback. When introducing a cost associated with subjects' response, we found that subjects exhibited a characteristic bias towards low cost responses when their uncertainty was high. This result is in accordance with risk-sensitive decision-making processes that allow for deviations from Bayes optimal decision-making in the face of uncertainty. Our results suggest that both Bayesian integration and risk-sensitivity are important factors to understand sensorimotor integration in a quantitative fashion.

DOI [BibTex]


2003


Support Vector Channel Selection in BCI

Lal, T., Schröder, M., Hinterberger, T., Weston, J., Bogdan, M., Birbaumer, N., Schölkopf, B.

(120), Max Planck Institute for Biological Cybernetics, Tuebingen, Germany, December 2003 (techreport)

Abstract
When designing a Brain Computer Interface (BCI) system, one can choose from a variety of features that may be useful for classifying brain activity during a mental task. For the special case of classifying EEG signals, we propose the use of the state-of-the-art feature selection algorithms Recursive Feature Elimination [3] and Zero-Norm Optimization [13], which are based on the training of Support Vector Machines (SVM) [11]. These algorithms can provide more accurate solutions than standard filter methods for feature selection [14]. We adapt the methods for the purpose of selecting EEG channels. For a motor imagery paradigm we show that the number of used channels can be reduced significantly without increasing the classification error. The resulting best channels agree well with the expected underlying cortical activity patterns during the mental tasks. Furthermore, we show how time-dependent, task-specific information can be visualized.
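A minimal sketch of channel-wise recursive elimination with a linear SVM, in the spirit of the abstract (scikit-learn is used here for illustration; the report's own EEG pipeline, feature layout, and exact ranking criterion may differ, and the data layout below is an assumption):

```python
# Sketch: recursively drop the EEG channel whose linear-SVM weights contribute
# least. Binary labels and contiguous per-channel feature blocks are assumed.
import numpy as np
from sklearn.svm import SVC

def rank_channels(X, y, n_channels, keep=8):
    """X: array of shape (trials, n_channels * feats_per_channel)."""
    feats = X.shape[1] // n_channels           # features per channel (assumed equal)
    active = list(range(n_channels))
    while len(active) > keep:
        cols = [c * feats + j for c in active for j in range(feats)]
        w = SVC(kernel="linear").fit(X[:, cols], y).coef_.ravel()
        # squared weight norm of each remaining channel's feature block
        scores = [np.sum(w[i * feats:(i + 1) * feats] ** 2) for i in range(len(active))]
        active.pop(int(np.argmin(scores)))     # remove the weakest channel
    return active
```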

PDF Web [BibTex]

Molecular phenotyping of human chondrocyte cell lines T/C-28a2, T/C-28a4, and C-28/I2

Finger, F., Schorle, C., Zien, A., Gebhard, P., Goldring, M., Aigner, T.

Arthritis & Rheumatism, 48(12):3395-3403, December 2003 (article)

[BibTex]

A Study on Rainfall-Runoff Models for Improving Ensemble Streamflow Prediction: 1. Rainfall-Runoff Models Using Artificial Neural Networks

Jeong, D., Kim, Y., Cho, S., Shin, H.

Journal of the Korean Society of Civil Engineers, 23(6B):521-530, December 2003 (article)

Abstract
The previous ESP (Ensemble Streamflow Prediction) studies conducted in Korea reported that the modeling error is a major source of the ESP forecast error in winter and spring (i.e. dry seasons), and thus suggested that improving the rainfall-runoff model would be critical to obtain more accurate probabilistic forecasts with ESP. This study used two types of Artificial Neural Networks (ANN), such as a Single Neural Network (SNN) and an Ensemble Neural Networks (ENN), to improve the simulation capability of the rainfall-runoff model of the ESP forecasting system for the monthly inflow to the Daecheong dam. Applied for the first time to Korean hydrology, ENN combines the outputs of member models so that it can control the generalization error better than SNN. Because the dry and the flood season in Korea shows considerably different streamflow characteristics, this study calibrated the rainfall-runoff model separately for each season. Therefore, four rainfall-runoff models were developed according to the ANN types and the seasons. This study compared the ANN models with a conceptual rainfall-runoff model called TANK and verified that the ANN models were superior to TANK. Among the ANN models, ENN was more accurate than SNN. The ANN model performance was improved when the model was calibrated separately for the dry and the flood season. The best ANN model developed in this article will be incorporated into the ESP system to increase the forecast capability of ESP for the monthly inflow to the Daecheong dam.
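A minimal sketch of the ENN idea described above, i.e., combining the outputs of several member networks (scikit-learn is used for illustration; the study's own network design, inputs, and seasonal calibration are not reproduced here):

```python
# ENN-style sketch: average the predictions of several small MLPs trained with
# different random initializations. Illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

def ensemble_predict(X_train, y_train, X_test, n_members=5):
    preds = []
    for seed in range(n_members):
        net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=seed)
        net.fit(X_train, y_train)
        preds.append(net.predict(X_test))
    return np.mean(preds, axis=0)   # ensemble output = mean of member outputs
```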

[BibTex]

Quantitative Cerebral Blood Flow Measurements in the Rat Using a Beta-Probe and H₂¹⁵O

Weber, B., Spaeth, N., Wyss, M., Wild, D., Burger, C., Stanley, R., Buck, A.

Journal of Cerebral Blood Flow and Metabolism, 23(12):1455-1460, December 2003 (article)

Abstract
Beta-probes are a relatively new tool for tracer kinetic studies in animals. They are highly suited to evaluate new positron emission tomography tracers or measure physiologic parameters at rest and after some kind of stimulation or intervention. In many of these experiments, the knowledge of CBF is highly important. Thus, the purpose of this study was to evaluate the method of CBF measurements using a beta-probe and H₂¹⁵O. CBF was measured in the barrel cortex of eight rats at baseline and after acetazolamide challenge. Trigeminal nerve stimulation was additionally performed in five animals. In each category, three injections of 250 to 300 MBq H₂¹⁵O were performed at 10-minute intervals. Data were analyzed using a standard one-tissue compartment model (K1 = CBF, k2 = CBF/p, where p is the partition coefficient). Values for K1 were 0.35 ± 0.09, 0.58 ± 0.16, and 0.49 ± 0.03 mL·min⁻¹·mL⁻¹ at rest, after acetazolamide challenge, and during trigeminal nerve stimulation, respectively. The corresponding values for k2 were 0.55 ± 0.12, 0.94 ± 0.16, and 0.85 ± 0.12 min⁻¹, and for p were 0.64 ± 0.05, 0.61 ± 0.07, and 0.59 ± 0.06. The standard deviation of the difference between two successive experiments, a measure for the reproducibility of the method, was 10.1%, 13.0%, and 5.7% for K1, k2, and p, respectively. In summary, beta-probes in conjunction with H₂¹⁵O allow the reproducible quantitative measurement of CBF, although some systematic underestimation seems to occur, probably because of partial volume effects.
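The reported values are internally consistent with the stated model: K1 = CBF and k2 = CBF/p imply p = K1/k2. A quick check using only the numbers quoted in the abstract:

```python
# Consistency check of the one-tissue compartment values quoted above:
# K1 = CBF and k2 = CBF / p, hence p = K1 / k2.
K1 = [0.35, 0.58, 0.49]   # mL·min^-1·mL^-1: rest, acetazolamide, stimulation
k2 = [0.55, 0.94, 0.85]   # min^-1
for name, a, b in zip(["rest", "acetazolamide", "stimulation"], K1, k2):
    print(f"{name}: p = {a / b:.2f}")   # ~0.64, 0.62, 0.58 vs. reported 0.64, 0.61, 0.59
```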

PDF DOI [BibTex]

Blind separation of post-nonlinear mixtures using linearizing transformations and temporal decorrelation

Ziehe, A., Kawanabe, M., Harmeling, S., Müller, K.

Journal of Machine Learning Research, 4(7-8):1319-1338, November 2003 (article)

Abstract
We propose two methods that reduce the post-nonlinear blind source separation problem (PNL-BSS) to a linear BSS problem. The first method is based on the concept of maximal correlation: we apply the alternating conditional expectation (ACE) algorithm--a powerful technique from non-parametric statistics--to approximately invert the componentwise nonlinear functions. The second method is a Gaussianizing transformation, which is motivated by the fact that linearly mixed signals before nonlinear transformation are approximately Gaussian distributed. This heuristic, but simple and efficient, procedure works as well as the ACE method. Using the framework provided by ACE, convergence can be proven. The optimal transformations obtained by ACE coincide with the sought-after inverse functions of the nonlinearities. After equalizing the nonlinearities, temporal decorrelation separation (TDSEP) allows us to recover the source signals. Numerical simulations testing "ACE-TD" and "Gauss-TD" on realistic examples are performed with excellent results.
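A minimal sketch of the Gaussianizing idea for one observed channel: map the empirical ranks through the standard normal quantile function so the marginal becomes approximately Gaussian. This illustrates the concept only and is not the paper's implementation; the example nonlinearity is hypothetical.

```python
# Gaussianizing transform sketch: rank-based mapping to standard normal quantiles.
import numpy as np
from scipy.stats import norm, rankdata

def gaussianize(x):
    """Monotonically transform a 1-D signal so its marginal is ~N(0, 1)."""
    u = rankdata(x) / (len(x) + 1.0)   # empirical CDF values in (0, 1)
    return norm.ppf(u)                 # standard normal quantiles

# e.g. undoing a componentwise nonlinearity such as tanh applied to a mixture
s = np.random.randn(1000)
x = np.tanh(2.0 * s)                   # post-nonlinear observation
x_lin = gaussianize(x)                 # approximately linear in s again
```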

PDF PDF DOI [BibTex]

Correlated stage- and subfield-associated hippocampal gene expression patterns in experimental and human temporal lobe epilepsy

Becker, A., Chen, J., Zien, A., Sochivko, D., Normann, S., Schramm, J., Elger, C., Wiestler, O., Blumcke, I.

European Journal of Neuroscience, 18(10):2792-2802, November 2003 (article)

Abstract
Epileptic activity evokes profound alterations of hippocampal organization and function. Genomic responses may reflect immediate consequences of excitatory stimulation as well as sustained molecular processes related to neuronal plasticity and structural remodeling. Using oligonucleotide microarrays with 8799 sequences, we determined subregional gene expression profiles in rats subjected to pilocarpine-induced epilepsy (U34A arrays, Affymetrix, Santa Clara, CA, USA; P < 0.05, twofold change, n = 3 per stage). Patterns of gene expression corresponded to distinct stages of epilepsy development. The highest number of differentially expressed genes (dentate gyrus, approx. 400 genes and CA1, approx. 700 genes) was observed 3 days after status epilepticus. The majority of up-regulated genes was associated with mechanisms of cellular stress and injury - 14 days after status epilepticus, numerous transcription factors and genes linked to cytoskeletal and synaptic reorganization were differentially expressed and, in the stage of chronic spontaneous seizures, distinct changes were observed in the transcription of genes involved in various neurotransmission pathways and between animals with low vs. high seizure frequency. A number of genes (n = 18) differentially expressed during the chronic epileptic stage showed corresponding expression patterns in hippocampal subfields of patients with pharmacoresistant temporal lobe epilepsy (n = 5 temporal lobe epilepsy patients; U133A microarrays, Affymetrix; covering 22284 human sequences). These data provide novel insights into the molecular mechanisms of epileptogenesis and seizure-associated cellular and structural remodeling of the hippocampus.

[BibTex]

Concentration Inequalities for Sub-Additive Functions Using the Entropy Method

Bousquet, O.

Stochastic Inequalities and Applications, 56, pages: 213-247, Progress in Probability, (Editors: Giné, E., C. Houdré and D. Nualart), November 2003 (article)

Abstract
We obtain exponential concentration inequalities for sub-additive functions of independent random variables under weak conditions on the increments of those functions, like the existence of exponential moments for these increments. As a consequence of these general inequalities, we obtain refinements of Talagrand's inequality for empirical processes and new bounds for randomized empirical processes. These results are obtained by further developing the entropy method introduced by Ledoux.
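For orientation, such refinements of Talagrand's inequality for empirical processes are commonly quoted in the following Bernstein-type form (this is the standard statement of such results, not a verbatim quote from the paper): for $Z=\sup_{f\in\mathcal{F}}\sum_{i=1}^n f(X_i)$ with functions taking values in $[-1,1]$, $\mathbb{E}f(X_i)=0$, $\sigma^2=\sup_{f}\operatorname{Var} f(X_1)$, and $v=n\sigma^2+2\,\mathbb{E}Z$,

$$\mathbb{P}\left(Z \ge \mathbb{E}Z + t\right) \;\le\; \exp\!\left(-\frac{t^2}{2v + 2t/3}\right), \qquad t \ge 0.$$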

PostScript [BibTex]

Image Reconstruction by Linear Programming

Tsuda, K., Rätsch, G.

(118), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, October 2003 (techreport)

PDF [BibTex]

Technical report on Separation methods for nonlinear mixtures

Jutten, C., Karhunen, J., Almeida, L., Harmeling, S.

(D29), EU-Project BLISS, October 2003 (techreport)

PDF [BibTex]

YKL-39 (chitinase 3-like protein 2), but not YKL-40 (chitinase 3-like protein 1), is up regulated in osteoarthritic chondrocytes

Knorr, T., Obermayr, F., Bartnik, E., Zien, A., Aigner, T.

Annals of the Rheumatic Diseases, 62(10):995-998, October 2003 (article)

Abstract
OBJECTIVE: To investigate quantitatively the mRNA expression levels of YKL-40, an established marker of rheumatoid and osteoarthritic cartilage degeneration in synovial fluid and serum, and a closely related molecule YKL-39, in articular chondrocytes. METHODS: cDNA array and online quantitative polymerase chain reaction (PCR) were used to measure mRNA expression levels of YKL-39 and YKL-40 in chondrocytes in normal, early degenerative, and late stage osteoarthritic cartilage samples. RESULTS: Expression analysis showed high levels of both proteins in normal articular chondrocytes, with lower levels of YKL-39 than YKL-40. Whereas YKL-40 was significantly down regulated in late stage osteoarthritic chondrocytes, YKL-39 was significantly up regulated. In vitro both YKLs were down regulated by interleukin 1beta. CONCLUSIONS: The up regulation of YKL-39 in osteoarthritic cartilage suggests that YKL-39 may be a more accurate marker of chondrocyte activation than YKL-40, although it has yet to be established as a suitable marker in synovial fluid and serum. The decreased expression of YKL-40 by osteoarthritic chondrocytes is surprising as increased levels have been reported in rheumatoid and osteoarthritic synovial fluid, where it may derive from activated synovial cells or osteophytic tissue or by increased matrix destruction in the osteoarthritic joint. YKL-39 and YKL-40 are potentially interesting marker molecules for arthritic joint disease because they are abundantly expressed by both normal and osteoarthritic chondrocytes.

[BibTex]

Technical report on implementation of linear methods and validation on acoustic sources

Harmeling, S., Bünau, P., Ziehe, A., Pham, D.

EU-Project BLISS, September 2003 (techreport)

PDF [BibTex]

Statistical Learning Theory, Capacity and Complexity

Schölkopf, B.

Complexity, 8(4):87-94, July 2003 (article)

Abstract
We give an exposition of the ideas of statistical learning theory, followed by a discussion of how a reinterpretation of the insights of learning theory could potentially also benefit our understanding of a certain notion of complexity.
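A canonical example of the capacity-based bounds discussed in such expositions (quoted in its commonly used form, not taken from the article itself): for a function class of VC dimension $h$ and $n$ training examples, with probability at least $1-\delta$,

$$R(f) \;\le\; R_{\mathrm{emp}}(f) + \sqrt{\frac{h\left(\ln\frac{2n}{h}+1\right) + \ln\frac{4}{\delta}}{n}},$$

so that the gap between training and test error is controlled by the capacity of the function class rather than by the dimension of the input space.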

Web DOI [BibTex]


Ranking on Data Manifolds

Zhou, D., Weston, J., Gretton, A., Bousquet, O., Schölkopf, B.

(113), Max Planck Institute for Biological Cybernetics, 72076 Tuebingen, Germany, June 2003 (techreport)

Abstract
The Google search engine has had a huge success with its PageRank web page ranking algorithm, which exploits the global, rather than local, hyperlink structure of the World Wide Web using a random walk. This algorithm can only be used for graph data, however. Here we propose a simple universal ranking algorithm for vectorial data, based on exploring the intrinsic global geometric structure revealed by a large amount of data. Experimental results on image, text, and bioinformatics data illustrate the validity of our algorithm.
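The ranking rule described above is usually summarized by a single closed form (stated here from the standard presentation of manifold ranking, for orientation): with affinity matrix $W$ (zero diagonal), degree matrix $D$, normalization $S = D^{-1/2} W D^{-1/2}$, query indicator vector $y$, and $\alpha \in [0,1)$, the ranking scores are

$$f^{*} = (I - \alpha S)^{-1} y,$$

and items are returned in order of decreasing $f^{*}$.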

PDF [BibTex]

Kernel Hebbian Algorithm for Iterative Kernel Principal Component Analysis

Kim, K., Franz, M., Schölkopf, B.

(109), MPI f. biologische Kybernetik, Tuebingen, June 2003 (techreport)

Abstract
A new method for performing a kernel principal component analysis is proposed. By kernelizing the generalized Hebbian algorithm, one can iteratively estimate the principal components in a reproducing kernel Hilbert space with only linear order memory complexity. The derivation of the method, a convergence proof, and preliminary applications in image hyperresolution are presented. In addition, we discuss the extension of the method to the online learning of kernel principal components.
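For orientation, the generalized Hebbian rule being kernelized updates the leading component as (standard Oja/Sanger form, not quoted from the report)

$$w_{t+1} = w_t + \eta_t\left(y_t\, x_t - y_t^{2}\, w_t\right), \qquad y_t = w_t^{\top} x_t,$$

with additional deflation terms for higher components; the kernelized variant performs the same update on the expansion coefficients of $w$ in the span of the mapped training points, which is what keeps the memory requirement linear.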

PDF [BibTex]

Learning with Local and Global Consistency

Zhou, D., Bousquet, O., Lal, T., Weston, J., Schölkopf, B.

(112), Max Planck Institute for Biological Cybernetics, Tuebingen, Germany, June 2003 (techreport)

Abstract
We consider the learning problem in the transductive setting. Given a set of points of which only some are labeled, the goal is to predict the label of the unlabeled points. A principled clue to solve such a learning problem is the consistency assumption that a classifying function should be sufficiently smooth with respect to the structure revealed by these known labeled and unlabeled points. We present a simple algorithm to obtain such a smooth solution. Our method yields encouraging experimental results on a number of classification problems and demonstrates effective use of unlabeled data.
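A minimal sketch of label propagation in the spirit of the local-and-global-consistency algorithm described above: iterate F ← αSF + (1 − α)Y on a symmetrically normalized affinity matrix. This is an illustration, not the report's code; the Gaussian affinity and its bandwidth are assumptions.

```python
# Label propagation sketch: F <- alpha * S @ F + (1 - alpha) * Y.
import numpy as np

def propagate_labels(X, y, alpha=0.99, sigma=1.0, n_iter=200):
    """y: length-n array with class indices for labeled points, -1 for unlabeled."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    n = len(X)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)                      # no self-affinity
    Dm12 = 1.0 / np.sqrt(W.sum(axis=1) + 1e-12)
    S = W * Dm12[:, None] * Dm12[None, :]         # D^{-1/2} W D^{-1/2}
    classes = np.unique(y[y >= 0])
    Y = np.zeros((n, len(classes)))
    for j, c in enumerate(classes):
        Y[y == c, j] = 1.0
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1 - alpha) * Y       # spread labels along the manifold
    return classes[np.argmax(F, axis=1)]          # predicted labels for all points
```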

[BibTex]

Dealing with large Diagonals in Kernel Matrices

Weston, J., Schölkopf, B., Eskin, E., Leslie, C., Noble, W.

Annals of the Institute of Statistical Mathematics, 55(2):391-408, June 2003 (article)

Abstract
In kernel methods, all the information about the training data is contained in the Gram matrix. If this matrix has large diagonal values, which arises for many types of kernels, then kernel methods do not perform well. We propose and test several methods for dealing with this problem by reducing the dynamic range of the matrix while preserving the positive definiteness of the Hessian of the quadratic programming problem that one has to solve when training a Support Vector Machine, which is a common kernel approach for pattern recognition.

PDF DOI [BibTex]

Implicit Wiener Series

Franz, M., Schölkopf, B.

(114), Max Planck Institute for Biological Cybernetics, June 2003 (techreport)

Abstract
The Wiener series is one of the standard methods to systematically characterize the nonlinearity of a neural system. The classical estimation method of the expansion coefficients via cross-correlation suffers from severe problems that prevent its application to high-dimensional and strongly nonlinear systems. We propose a new estimation method based on regression in a reproducing kernel Hilbert space that overcomes these problems. Numerical experiments show performance advantages in terms of convergence, interpretability and system size that can be handled.

PDF [BibTex]

Machine Learning approaches to protein ranking: discriminative, semi-supervised, scalable algorithms

Weston, J., Leslie, C., Elisseeff, A., Noble, W.

(111), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, June 2003 (techreport)

Abstract
A key tool in protein function discovery is the ability to rank databases of proteins given a query amino acid sequence. The most successful method so far is a web-based tool called PSI-BLAST, which uses heuristic alignment of a profile built using the large unlabeled database. It has been shown that such use of global information via unlabeled data improves over a local measure derived from a basic pairwise alignment such as that performed by PSI-BLAST's predecessor, BLAST. In this article we look at ways of leveraging techniques from the field of machine learning for the problem of ranking. We show how clustering and semi-supervised learning techniques, which aim to capture global structure in data, can significantly improve over PSI-BLAST.

PDF [BibTex]

The Metric Nearness Problem with Applications

Dhillon, I., Sra, S., Tropp, J.

Univ. of Texas at Austin, June 2003 (techreport)

GZIP [BibTex]

The em Algorithm for Kernel Matrix Completion with Auxiliary Data

Tsuda, K., Akaho, S., Asai, K.

Journal of Machine Learning Research, 4, pages: 67-81, May 2003 (article)

PDF [BibTex]

Constructing Descriptive and Discriminative Non-linear Features: Rayleigh Coefficients in Kernel Feature Spaces

Mika, S., Rätsch, G., Weston, J., Schölkopf, B., Smola, A., Müller, K.

IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(5):623-628, May 2003 (article)

Abstract
We incorporate prior knowledge to construct nonlinear algorithms for invariant feature extraction and discrimination. Employing a unified framework in terms of a nonlinearized variant of the Rayleigh coefficient, we propose nonlinear generalizations of Fisher's discriminant and oriented PCA using support vector kernel functions. Extensive simulations show the utility of our approach.
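The Rayleigh coefficient referred to in the abstract is the generic ratio

$$J(w) = \frac{w^{\top} S_B\, w}{w^{\top} S_W\, w},$$

with $S_B$ and $S_W$ the between-class and within-class scatter matrices; maximizing $J$ recovers Fisher's discriminant, and the kernel variants replace $w$ by an expansion in feature space.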

DOI [BibTex]

The Geometry Of Kernel Canonical Correlation Analysis

Kuss, M., Graepel, T.

(108), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, May 2003 (techreport)

Abstract
Canonical correlation analysis (CCA) is a classical multivariate method concerned with describing linear dependencies between sets of variables. After a short exposition of the linear sample CCA problem and its analytical solution, the article proceeds with a detailed characterization of its geometry. Projection operators are used to illustrate the relations between canonical vectors and variates. The article then addresses the problem of CCA between spaces spanned by objects mapped into kernel feature spaces. An exact solution for this kernel canonical correlation (KCCA) problem is derived from a geometric point of view. It shows that the expansion coefficients of the canonical vectors in their respective feature space can be found by linear CCA in the basis induced by kernel principal component analysis. The effect of mappings into higher dimensional feature spaces is considered critically since it simplifies the CCA problem in general. Then two regularized variants of KCCA are discussed. Relations to other methods are illustrated, e.g., multicategory kernel Fisher discriminant analysis, kernel principal component regression and possible applications thereof in blind source separation.
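For reference, the linear CCA problem the report starts from seeks direction pairs maximizing the correlation of the projections,

$$\rho = \max_{a,\,b}\; \frac{a^{\top} C_{xy}\, b}{\sqrt{a^{\top} C_{xx}\, a}\;\sqrt{b^{\top} C_{yy}\, b}},$$

with $C_{xx}$, $C_{yy}$ the covariance and $C_{xy}$ the cross-covariance matrices; the kernel version solves the same problem for expansion coefficients in the KPCA basis, as described in the abstract.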

PDF [BibTex]

Kernel-based nonlinear blind source separation

Harmeling, S., Ziehe, A., Kawanabe, M., Müller, K.

Neural Computation, 15(5):1089-1124, May 2003 (article)

Abstract
We propose kTDSEP, a kernel-based algorithm for nonlinear blind source separation (BSS). It combines complementary research fields: kernel feature spaces and BSS using temporal information. This yields an efficient algorithm for nonlinear BSS with invertible nonlinearity. Key assumptions are that the kernel feature space is chosen rich enough to approximate the nonlinearity and that signals of interest contain temporal information. Both assumptions are fulfilled for a wide set of real-world applications. The algorithm works as follows: First, the data are (implicitly) mapped to a high (possibly infinite)-dimensional kernel feature space. In practice, however, the data form a smaller submanifold in feature space—even smaller than the number of training data points—a fact that has already been used by, for example, reduced set techniques for support vector machines. We propose to adapt to this effective dimension as a preprocessing step and to construct an orthonormal basis of this submanifold. The latter dimension-reduction step is essential for making the subsequent application of BSS methods computationally and numerically tractable. In the reduced space, we use a BSS algorithm that is based on second-order temporal decorrelation. Finally, we propose a selection procedure to obtain the original sources from the extracted nonlinear components automatically. Experiments demonstrate the excellent performance and efficiency of our kTDSEP algorithm for several problems of nonlinear BSS and for more than two sources.

PDF Web DOI [BibTex]

The Kernel Mutual Information

Gretton, A., Herbrich, R., Smola, A.

Max Planck Institute for Biological Cybernetics, April 2003 (techreport)

Abstract
We introduce two new functions, the kernel covariance (KC) and the kernel mutual information (KMI), to measure the degree of independence of several continuous random variables. The former is guaranteed to be zero if and only if the random variables are pairwise independent; the latter shares this property, and is in addition an approximate upper bound on the mutual information, as measured near independence, and is based on a kernel density estimate. We show that Bach and Jordan's kernel generalised variance (KGV) is also an upper bound on the same kernel density estimate, but is looser. Finally, we suggest that the addition of a regularising term in the KGV causes it to approach the KMI, which motivates the introduction of this regularisation. The performance of the KC and KMI is verified in the context of instantaneous independent component analysis (ICA), by recovering both artificial and real (musical) signals following linear mixing.

PostScript [BibTex]

Tractable Inference for Probabilistic Data Models

Csato, L., Opper, M., Winther, O.

Complexity, 8(4):64-68, April 2003 (article)

Abstract
We present an approximation technique for probabilistic data models with a large number of hidden variables, based on ideas from statistical physics. We give examples for two nontrivial applications.

PDF GZIP Web [BibTex]

Feature selection and transduction for prediction of molecular bioactivity for drug design

Weston, J., Perez-Cruz, F., Bousquet, O., Chapelle, O., Elisseeff, A., Schölkopf, B.

Bioinformatics, 19(6):764-771, April 2003 (article)

Abstract
Motivation: In drug discovery a key task is to identify characteristics that separate active (binding) compounds from inactive (non-binding) ones. An automated prediction system can help reduce resources necessary to carry out this task. Results: Two methods for prediction of molecular bioactivity for drug design are introduced and shown to perform well in a data set previously studied as part of the KDD (Knowledge Discovery and Data Mining) Cup 2001. The data is characterized by very few positive examples, a very large number of features (describing three-dimensional properties of the molecules) and rather different distributions between training and test data. Two techniques are introduced specifically to tackle these problems: a feature selection method for unbalanced data and a classifier which adapts to the distribution of the unlabeled test data (a so-called transductive method). We show both techniques improve identification performance and in conjunction provide an improvement over using only one of the techniques. Our results suggest the importance of taking into account the characteristics in this data which may also be relevant in other problems of a similar type.

Web [BibTex]


Use of the Zero-Norm with Linear Models and Kernel Methods

Weston, J., Elisseeff, A., Schölkopf, B., Tipping, M.

Journal of Machine Learning Research, 3, pages: 1439-1461, March 2003 (article)

Abstract
We explore the use of the so-called zero-norm of the parameters of linear models in learning. Minimization of such a quantity has many uses in a machine learning context: for variable or feature selection, minimizing training error and ensuring sparsity in solutions. We derive a simple but practical method for achieving these goals and discuss its relationship to existing techniques of minimizing the zero-norm. The method boils down to implementing a simple modification of vanilla SVM, namely via an iterative multiplicative rescaling of the training data. Applications we investigate which aid our discussion include variable and feature selection on biological microarray data, and multicategory classification.
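A minimal sketch of the multiplicative-rescaling idea described in the abstract: repeatedly train a linear SVM and rescale each feature by the magnitude of its learned weight, so that irrelevant features shrink toward zero. This is illustrative only; the stopping rule, normalization, and use of scikit-learn are assumptions, not the paper's code.

```python
# Iterative multiplicative rescaling sketch for approximate zero-norm minimization.
import numpy as np
from sklearn.svm import LinearSVC

def zero_norm_scaling(X, y, n_iter=10, C=1.0):
    z = np.ones(X.shape[1])                  # per-feature scaling factors
    for _ in range(n_iter):
        clf = LinearSVC(C=C).fit(X * z, y)   # binary labels assumed
        z = z * np.abs(clf.coef_.ravel())    # multiplicative rescaling by |w|
        z = z / (z.max() + 1e-12)            # normalize for numerical stability
    return z                                 # near-zero entries ~ discarded features
```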

PDF PostScript PDF [BibTex]

Expectation Maximization for Clustering on Hyperspheres

Banerjee, A., Dhillon, I., Ghosh, J., Sra, S.

Univ. of Texas at Austin, February 2003 (techreport)

GZIP [BibTex]

Modeling Data using Directional Distributions

Dhillon, I., Sra, S.

Univ. of Texas at Austin, January 2003 (techreport)

GZIP [BibTex]

An Introduction to Variable and Feature Selection.

Guyon, I., Elisseeff, A.

Journal of Machine Learning Research, 3, pages: 1157-1182, 2003 (article)

[BibTex]

A Note on Parameter Tuning for On-Line Shifting Algorithms

Bousquet, O.

Max Planck Institute for Biological Cybernetics, Tübingen, Germany, 2003 (techreport)

Abstract
In this short note, building on ideas of M. Herbster [2] we propose a method for automatically tuning the parameter of the FIXED-SHARE algorithm proposed by Herbster and Warmuth [3] in the context of on-line learning with shifting experts. We show that this can be done with a memory requirement of $O(nT)$ and that the additional loss incurred by the tuning is the same as the loss incurred for estimating the parameter of a Bernoulli random variable.

PDF PostScript [BibTex]
