2016


Nonlinear functional causal models for distinguishing cause from effect

Zhang, K., Hyvärinen, A.

In Statistics and Causality: Methods for Applied Empirical Research, pages: 185-201, 8, 1st, (Editors: Wolfgang Wiedermann and Alexander von Eye), John Wiley & Sons, Inc., 2016 (inbook)

[BibTex]

Analysis of multiparametric MRI using a semi-supervised random forest framework allows the detection of therapy response in ischemic stroke

Castaneda, S., Katiyar, P., Russo, F., Calaminus, C., Disselhorst, J. A., Ziemann, U., Kohlhofer, U., Quintanilla-Martinez, L., Poli, S., Pichler, B. J.

World Molecular Imaging Conference, 2016 (talk)

link (url) [BibTex]

A cognitive brain–computer interface for patients with amyotrophic lateral sclerosis

Hohmann, M., Fomina, T., Jayaram, V., Widmann, N., Förster, C., Just, J., Synofzik, M., Schölkopf, B., Schöls, L., Grosse-Wentrup, M.

In Brain-Computer Interfaces: Lab Experiments to Real-World Applications, 228(Supplement C):221-239, 8, Progress in Brain Research, (Editors: Damien Coyle), Elsevier, 2016 (incollection)

DOI Project Page [BibTex]

Multi-view learning on multiparametric PET/MRI quantifies intratumoral heterogeneity and determines therapy efficacy

Katiyar, P., Divine, M. R., Kohlhofer, U., Quintanilla-Martinez, L., Siegemund, M., Pfizenmaier, K., Kontermann, R., Pichler, B. J., Disselhorst, J. A.

World Molecular Imaging Conference, 2016 (talk)

link (url) [BibTex]

2015


Causal Inference for Empirical Time Series Based on the Postulate of Independence of Cause and Mechanism

Besserve, M.

53rd Annual Allerton Conference on Communication, Control, and Computing, September 2015 (talk)

[BibTex]

Kernel methods in medical imaging

Charpiat, G., Hofmann, M., Schölkopf, B.

In Handbook of Biomedical Imaging, pages: 63-81, 4, (Editors: Paragios, N., Duncan, J. and Ayache, N.), Springer, Berlin, Germany, June 2015 (inbook)

Web link (url) [BibTex]

Independence of cause and mechanism in brain networks

Besserve, M.

DALI workshop on Networks: Processes and Causality, April 2015 (talk)

[BibTex]

Information-Theoretic Implications of Classical and Quantum Causal Structures

Chaves, R., Majenz, C., Luft, L., Maciel, T., Janzing, D., Schölkopf, B., Gross, D.

18th Conference on Quantum Information Processing (QIP), 2015 (talk)

Web link (url) [BibTex]

Assessment of brain tissue damage in the Sub-Acute Stroke Region by Multiparametric Imaging using [89-Zr]-Desferal-EPO-PET/MRI

Castaneda, S. G., Katiyar, P., Russo, F., Disselhorst, J. A., Calaminus, C., Poli, S., Maurer, A., Ziemann, U., Pichler, B. J.

World Molecular Imaging Conference, 2015 (talk)

[BibTex]

Statistical and Machine Learning Methods for Neuroimaging: Examples, Challenges, and Extensions to Diffusion Imaging Data

O’Donnell, L. J., Schultz, T.

In Visualization and Processing of Higher Order Descriptors for Multi-Valued Data, pages: 299-319, (Editors: Hotz, I. and Schultz, T.), Springer, 2015 (inbook)

[BibTex]

Early time point in vivo PET/MR is a promising biomarker for determining efficacy of a novel Db(αEGFR)-scTRAIL fusion protein therapy in a colon cancer model

Divine, M. R., Harant, M., Katiyar, P., Disselhorst, J. A., Bukala, D., Aidone, S., Siegemund, M., Pfizenmaier, K., Kontermann, R., Pichler, B. J.

World Molecular Imaging Conference, 2015 (talk)

[BibTex]

Justifying Information-Geometric Causal Inference

Janzing, D., Steudel, B., Shajarisales, N., Schölkopf, B.

In Measures of Complexity: Festschrift for Alexey Chervonenkis, pages: 253-265, 18, (Editors: Vovk, V., Papadopoulos, H. and Gammerman, A.), Springer, 2015 (inbook)

DOI [BibTex]

The search for single exoplanet transits in the Kepler light curves

Foreman-Mackey, D., Hogg, D. W., Schölkopf, B.

IAU General Assembly, 22, pages: 2258352, 2015 (talk)

link (url) [BibTex]

2009


Machine Learning for Brain-Computer Interfaces

Hill, NJ.

Mini-Symposia on Assistive Machine Learning for People with Disabilities at NIPS (AMD), December 2009 (talk)

Abstract
Brain-computer interfaces (BCI) aim to be the ultimate in assistive technology: decoding a user's intentions directly from brain signals without involving any muscles or peripheral nerves. Thus, some classes of BCI potentially offer hope for users with even the most extreme cases of paralysis, such as in late-stage Amyotrophic Lateral Sclerosis, where nothing else currently allows communication of any kind. Other lines in BCI research aim to restore lost motor function in as natural a way as possible, reconnecting and in some cases re-training motor-cortical areas to control prosthetic, or previously paretic, limbs. Research and development are progressing on both invasive and non-invasive fronts, although BCI has yet to make a breakthrough to widespread clinical application. The high-noise, high-dimensional nature of brain signals, particularly in non-invasive approaches and in patient populations, makes robust decoding techniques a necessity. Generally, the approach has been to use relatively simple feature extraction techniques, such as template matching and band-power estimation, coupled to simple linear classifiers. This has led to a prevailing view among applied BCI researchers that (sophisticated) machine learning is irrelevant, since "it doesn't matter what classifier you use once you've done your preprocessing right and extracted the right features." I shall show a few examples of how this runs counter to both the empirical reality and the spirit of what needs to be done to bring BCI into clinical application. Along the way I'll highlight some of the interesting problems that remain open for machine learners.
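The "band-power features plus a linear classifier" pipeline mentioned in the abstract can be sketched in a few lines; the band edges, sampling rate, toy data and the use of scikit-learn's LDA below are illustrative assumptions, not details taken from the talk.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def bandpower_features(epochs, fs=250.0, band=(8.0, 12.0)):
    """Log band-power of each channel in each epoch.

    epochs: array of shape (n_trials, n_channels, n_samples).
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=-1)          # band-pass each channel
    return np.log(np.var(filtered, axis=-1))            # (n_trials, n_channels)

# Toy data standing in for EEG epochs from two mental states.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8, 500))
y = rng.integers(0, 2, size=100)

clf = LinearDiscriminantAnalysis()
clf.fit(bandpower_features(X[:80]), y[:80])
print("held-out accuracy:", clf.score(bandpower_features(X[80:]), y[80:]))
```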

PDF Web Web [BibTex]

PAC-Bayesian Approach to Formulation of Clustering Objectives

Seldin, Y.

NIPS Workshop on "Clustering: Science or Art? Towards Principled Approaches", December 2009 (talk)

Abstract
Clustering is a widely used tool for exploratory data analysis. However, the theoretical understanding of clustering is very limited. We still do not have a well-founded answer to the seemingly simple question of "how many clusters are present in the data?", and furthermore a formal comparison of clusterings based on different optimization objectives is far beyond our abilities. The lack of good theoretical support gives rise to multiple heuristics that confuse the practitioners and stall development of the field. We suggest that the ill-posed nature of clustering problems is caused by the fact that clustering is often taken out of its subsequent application context. We argue that one does not cluster the data just for the sake of clustering it, but rather to facilitate the solution of some higher level task. By evaluation of the clustering's contribution to the solution of the higher level task it is possible to compare different clusterings, even those obtained by different optimization objectives. In the preceding work it was shown that such an approach can be applied to evaluation and design of co-clustering solutions. Here we suggest that this approach can be extended to other settings, where clustering is applied.

PDF Web Web [BibTex]

Semi-supervised Kernel Canonical Correlation Analysis of Human Functional Magnetic Resonance Imaging Data

Shelton, JA.

Women in Machine Learning Workshop (WiML), December 2009 (talk)

Abstract
Kernel Canonical Correlation Analysis (KCCA) is a general technique for subspace learning that incorporates principal components analysis (PCA) and Fisher linear discriminant analysis (LDA) as special cases. By finding directions that maximize correlation, KCCA learns representations tied more closely to the underlying process generating the data and can ignore high-variance noise directions. However, for data where acquisition in a given modality is expensive or otherwise limited, KCCA may suffer from small sample effects. We propose to use semi-supervised Laplacian regularization to utilize data that are present in only one modality. This manifold learning approach is able to find highly correlated directions that also lie along the data manifold, resulting in a more robust estimate of correlated subspaces. Data acquired with functional magnetic resonance imaging (fMRI) are naturally amenable to subspace techniques, as the data are well aligned, and such data from the human brain are a particularly interesting candidate. In this study we implemented various supervised and semi-supervised versions of KCCA on human fMRI data, with regression to single and multivariate labels (corresponding to the video content subjects viewed during image acquisition). In each condition, Laplacian regularization improved performance, with the semi-supervised variants of KCCA yielding the best performance. We additionally analyze the weights learned by the regression in order to infer brain regions that are important during different types of visual processing.
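For orientation, the following is a minimal NumPy sketch of plain regularized linear CCA via a whitened cross-covariance SVD; it is a baseline for, not a reproduction of, the kernelized, Laplacian-regularized variant described above, and the data and regularization constant are illustrative assumptions.

```python
import numpy as np

def linear_cca(X, Y, reg=1e-3):
    """Regularized linear CCA: projection matrices for X and Y plus canonical correlations."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    return Wx @ U, Wy @ Vt.T, s      # X-directions, Y-directions, canonical correlations

# Toy example: two views sharing one latent signal.
rng = np.random.default_rng(0)
z = rng.standard_normal(500)
X = np.column_stack([z, rng.standard_normal(500)]) + 0.1 * rng.standard_normal((500, 2))
Y = np.column_stack([z, rng.standard_normal(500)]) + 0.1 * rng.standard_normal((500, 2))
A, B, corrs = linear_cca(X, Y)
print("canonical correlations:", np.round(corrs, 3))
```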

PDF Web [BibTex]


Event-Related Potentials in Brain-Computer Interfacing

Hill, NJ.

Invited lecture on the bachelor & masters course "Introduction to Brain-Computer Interfacing", October 2009 (talk)

Abstract
An introduction to event-related potentials with specific reference to their use in brain-computer interfacing applications and research.

PDF [BibTex]

BCI2000 and Python

Hill, NJ.

Invited lecture at the 5th International BCI2000 Workshop, October 2009 (talk)

Abstract
A tutorial, with exercises, on how to integrate your own Python code with the BCI2000 software package.

PDF [BibTex]

Implementing a Signal Processing Filter in BCI2000 Using C++

Hill, NJ., Mellinger, J.

Invited lecture at the 5th International BCI2000 Workshop, October 2009 (talk)

Abstract
This tutorial shows how the functionality of the BCI2000 software package can be extended with one's own code, using BCI2000's C++ API.

PDF [BibTex]

Toward a Theory of Consciousness

Tononi, G., Balduzzi, D.

In The Cognitive Neurosciences, pages: 1201-1220, (Editors: Gazzaniga, M.S.), MIT Press, Cambridge, MA, USA, October 2009 (inbook)

Web [BibTex]

Randomized algorithms for statistical image analysis based on percolation theory

Davies, P., Langovoy, M., Wittich, O.

27th European Meeting of Statisticians (EMS), July 2009 (talk)

Abstract
We propose a novel probabilistic method for detection of signals and reconstruction of images in the presence of random noise. The method uses results from percolation and random graph theories (see Grimmett (1999)). We address the problem of detection and estimation of signals in situations where the signal-to-noise ratio is particularly low. We present an algorithm that can detect objects of various shapes in noisy images. The algorithm has linear complexity and exponential accuracy. Our algorithm differs substantially from wavelet-based algorithms (see Arias-Castro et al. (2005)). Moreover, we present an algorithm that produces a crude estimate of an object based on the noisy picture. This algorithm also has linear complexity and is appropriate for real-time systems. We prove results on consistency and algorithmic complexity of our procedures.
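As a rough illustration of the flavor of such cluster-based detection (a toy sketch, not the authors' percolation-calibrated procedure), one can threshold a noisy image and keep only connected components of above-threshold pixels that exceed a size cutoff; the threshold and cutoff values below are arbitrary.

```python
import numpy as np
from scipy import ndimage

def detect_blobs(image, intensity_thresh=1.0, min_size=20):
    """Detect candidate objects as large connected clusters of bright pixels."""
    mask = image > intensity_thresh                       # pixel-wise thresholding
    labels, n = ndimage.label(mask)                       # connected components
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_size]
    return np.isin(labels, keep)                          # boolean map of detected objects

# Toy image: a faint square hidden in Gaussian noise.
rng = np.random.default_rng(0)
img = rng.standard_normal((128, 128))
img[40:70, 40:70] += 1.5
detection = detect_blobs(img)
print("detected pixels:", int(detection.sum()))
```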

Web PDF [BibTex]

Learning Motor Primitives for Robotics

Kober, J., Peters, J., Oztop, E.

Advanced Telecommunications Research Center ATR, June 2009 (talk)

Abstract
The acquisition and self-improvement of novel motor skills are among the most important problems in robotics. Motor primitives offer one of the most promising frameworks for the application of machine learning techniques in this context. Employing the dynamic systems motor primitives originally introduced by Ijspeert et al. (2003), appropriate learning algorithms for a concerted approach combining imitation and reinforcement learning are presented. Using these algorithms, new motor skills, namely Ball-in-a-Cup, Ball-Paddling and Dart-Throwing, are learned.
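The dynamic movement primitive formulation of Ijspeert et al. underlying this line of work can be sketched as follows; the gains, basis-function placement and weight vector are illustrative assumptions, not parameters from the talk.

```python
import numpy as np

def dmp_rollout(x0, goal, weights, tau=1.0, dt=0.001, alpha_z=25.0, beta_z=6.25, alpha_s=4.0):
    """Integrate a single-DOF dynamic movement primitive and return its trajectory.

    The forcing term is a normalized sum of Gaussian basis functions in the phase
    variable s, weighted by `weights` and scaled by s * (goal - x0).
    """
    n_basis = len(weights)
    centers = np.exp(-alpha_s * np.linspace(0, 1, n_basis))   # basis centers in phase space
    widths = n_basis ** 1.5 / centers
    x, v, s = x0, 0.0, 1.0
    traj = [x]
    for _ in range(int(tau / dt)):
        psi = np.exp(-widths * (s - centers) ** 2)
        f = (psi @ weights) / (psi.sum() + 1e-10) * s * (goal - x0)
        v += dt / tau * (alpha_z * (beta_z * (goal - x) - v) + f)  # transformation system
        x += dt / tau * v
        s += dt / tau * (-alpha_s * s)                             # canonical system
        traj.append(x)
    return np.array(traj)

# A hand-set weight vector just to show a shaped approach to the goal.
traj = dmp_rollout(x0=0.0, goal=1.0, weights=np.array([50.0, -20.0, 10.0, 0.0, 0.0]))
print("final position:", traj[-1])
```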

[BibTex]

Text Clustering with Mixture of von Mises-Fisher Distributions

Sra, S., Banerjee, A., Ghosh, J., Dhillon, I.

In Text mining: classification, clustering, and applications, pages: 121-161, Chapman & Hall/CRC data mining and knowledge discovery series, (Editors: Srivastava, A. N. and Sahami, M.), CRC Press, Boca Raton, FL, USA, June 2009 (inbook)

Web DOI [BibTex]

Learning To Detect Unseen Object Classes by Between-Class Attribute Transfer

Lampert, C.

IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), June 2009 (talk)

Web [BibTex]

Data Mining for Biologists

Tsuda, K.

In Biological Data Mining in Protein Interaction Networks, pages: 14-27, (Editors: Li, X. and Ng, S.-K.), Medical Information Science Reference, Hershey, PA, USA, May 2009 (inbook)

Abstract
In this tutorial chapter, we review basics about frequent pattern mining algorithms, including itemset mining, association rule mining and graph mining. These algorithms can find frequently appearing substructures in discrete data. They can discover structural motifs, for example, from mutation data, protein structures and chemical compounds. As they have been primarily used for business data, biological applications are not so common yet, but their potential impact would be large. Recent advances in computers including multicore machines and ever increasing memory capacity support the application of such methods to larger datasets. We explain technical aspects of the algorithms, but do not go into details. Current biological applications are summarized and possible future directions are given.
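To make the itemset-mining step concrete, here is a small, hedged sketch of the classic Apriori frequent-itemset algorithm over toy transactions; it is not the chapter's code and ignores all performance considerations.

```python
from itertools import combinations

def apriori(transactions, min_support=2):
    """Return all itemsets appearing in at least `min_support` transactions."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    current = [frozenset([i]) for i in items]            # candidate 1-itemsets
    frequent = {}
    while current:
        # Count support of the current candidates.
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        survivors = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(survivors)
        # Generate candidates one item larger by joining surviving sets.
        keys = list(survivors)
        current = list({a | b for a, b in combinations(keys, 2) if len(a | b) == len(a) + 1})
    return frequent

txns = [{"A", "B", "C"}, {"A", "C"}, {"A", "D"}, {"B", "C"}, {"A", "B", "C"}]
for itemset, support in sorted(apriori(txns).items(), key=lambda kv: -kv[1]):
    print(set(itemset), support)
```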

Web [BibTex]

Large Margin Methods for Part of Speech Tagging

Altun, Y.

In Automatic Speech and Speaker Recognition: Large Margin and Kernel Methods, pages: 141-160, (Editors: Keshet, J. and Bengio, S.), Wiley, Hoboken, NJ, USA, January 2009 (inbook)

Web [BibTex]

Covariate shift and local learning by distribution matching

Gretton, A., Smola, A., Huang, J., Schmittfull, M., Borgwardt, K., Schölkopf, B.

In Dataset Shift in Machine Learning, pages: 131-160, (Editors: Quiñonero-Candela, J., Sugiyama, M., Schwaighofer, A. and Lawrence, N. D.), MIT Press, Cambridge, MA, USA, 2009 (inbook)

Abstract
Given sets of observations of training and test data, we consider the problem of re-weighting the training data such that its distribution more closely matches that of the test data. We achieve this goal by matching covariate distributions between training and test sets in a high dimensional feature space (specifically, a reproducing kernel Hilbert space). This approach does not require distribution estimation. Instead, the sample weights are obtained by a simple quadratic programming procedure. We provide a uniform convergence bound on the distance between the reweighted training feature mean and the test feature mean, a transductive bound on the expected loss of an algorithm trained on the reweighted data, and a connection to single class SVMs. While our method is designed to deal with the case of simple covariate shift (in the sense of Chapter ??), we have also found benefits for sample selection bias on the labels. Our correction procedure yields its greatest and most consistent advantages when the learning algorithm returns a classifier/regressor that is "simpler" than the data might suggest.
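A minimal sketch of the kernel mean matching idea (reweighting training points so their kernel mean matches that of the test sample) follows; the RBF bandwidth, the bound B and the use of SciPy's SLSQP solver are assumptions for illustration, not the chapter's exact procedure.

```python
import numpy as np
from scipy.optimize import minimize

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_mean_matching(X_tr, X_te, B=10.0, eps=0.01, gamma=0.5):
    """Weights beta minimizing ||mean_i beta_i phi(x_i^tr) - mean_j phi(x_j^te)||^2 in the RKHS."""
    n_tr, n_te = len(X_tr), len(X_te)
    K = rbf(X_tr, X_tr, gamma)                              # train-train Gram matrix
    kappa = (n_tr / n_te) * rbf(X_tr, X_te, gamma).sum(axis=1)
    obj = lambda b: 0.5 * b @ K @ b - kappa @ b             # quadratic objective
    grad = lambda b: K @ b - kappa
    cons = ({"type": "ineq", "fun": lambda b: eps * n_tr - abs(b.sum() - n_tr)},)
    res = minimize(obj, np.ones(n_tr), jac=grad, bounds=[(0, B)] * n_tr,
                   constraints=cons, method="SLSQP")
    return res.x

# Toy covariate shift: training data centered at 0, test data shifted to 1.
rng = np.random.default_rng(0)
X_tr = rng.normal(0.0, 1.0, size=(200, 1))
X_te = rng.normal(1.0, 1.0, size=(100, 1))
beta = kernel_mean_matching(X_tr, X_te)
print("weights are larger near the test region:",
      beta[X_tr[:, 0] > 1].mean() > beta[X_tr[:, 0] < -1].mean())
```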

PDF Web [BibTex]

An introduction to Kernel Learning Algorithms

Gehler, P., Schölkopf, B.

In Kernel Methods for Remote Sensing Data Analysis, pages: 25-48, 2, (Editors: Gustavo Camps-Valls and Lorenzo Bruzzone), Wiley, New York, NY, USA, 2009 (inbook)

Abstract
Kernel learning algorithms are currently becoming a standard tool in the area of machine learning and pattern recognition. In this chapter we review the fundamental theory of kernel learning. As the basic building block we introduce the kernel function, which provides an elegant and general way to compare possibly very complex objects. We then review the concept of a reproducing kernel Hilbert space and state the representer theorem. Finally we give an overview of the most prominent algorithms, which are support vector classification and regression, Gaussian Processes and kernel principal component analysis. With multiple kernel learning and structured output prediction we also introduce some more recent advancements in the field.
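As a small companion sketch (not one of the chapter's algorithms), kernel ridge regression illustrates the representer theorem in a few lines: the learned function is a kernel expansion over the training points. The RBF kernel and regularization constant below are arbitrary choices.

```python
import numpy as np

def rbf_kernel(A, B, gamma=10.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelRidge:
    """f(x) = sum_i alpha_i k(x_i, x), with alpha = (K + lam I)^{-1} y."""

    def __init__(self, lam=1e-2, gamma=10.0):
        self.lam, self.gamma = lam, gamma

    def fit(self, X, y):
        self.X = X
        K = rbf_kernel(X, X, self.gamma)
        self.alpha = np.linalg.solve(K + self.lam * np.eye(len(X)), y)
        return self

    def predict(self, Xq):
        return rbf_kernel(Xq, self.X, self.gamma) @ self.alpha

# Fit a noisy sine curve.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(60, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(60)
model = KernelRidge().fit(X, y)
print("prediction at x=0.25:", model.predict(np.array([[0.25]]))[0])
```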

link (url) DOI [BibTex]

2007


Reaction graph kernels for discovering missing enzymes in the plant secondary metabolism

Saigo, H., Hattori, M., Tsuda, K.

NIPS Workshop on Machine Learning in Computational Biology, December 2007 (talk)

Abstract
The secondary metabolic pathway in plants is important for finding druggable candidate enzymes. However, there are many enzymes whose functions are still undiscovered, especially in organism-specific metabolic pathways. We propose reaction graph kernels for automatically assigning EC numbers to unknown enzymatic reactions in a metabolic network. Experiments are carried out on the KEGG/REACTION database, and our method successfully predicted the first three digits of the EC number with 83% accuracy. We also exhaustively predicted missing enzymatic functions in the plant secondary metabolism pathways, and evaluated our results in terms of biochemical validity.

Web [BibTex]

Positional Oligomer Importance Matrices

Sonnenburg, S., Zien, A., Philips, P., Rätsch, G.

NIPS Workshop on Machine Learning in Computational Biology, December 2007 (talk)

Abstract
At the heart of many important bioinformatics problems, such as gene finding and function prediction, is the classification of biological sequences, above all of DNA and proteins. In many cases, the most accurate classifiers are obtained by training SVMs with complex sequence kernels, for instance for transcription starts or splice sites. However, an often criticized downside of SVMs with complex kernels is that it is very hard for humans to understand the learned decision rules and to derive biological insights from them. To close this gap, we introduce the concept of positional oligomer importance matrices (POIMs) and develop an efficient algorithm for their computation. We demonstrate how they overcome the limitations of sequence logos, and how they can be used to find relevant motifs for different biological phenomena in a straightforward way. Note that the concept of POIMs is not limited to interpreting SVMs, but is applicable to general k-mer based scoring systems.

Web [BibTex]

Machine Learning Algorithms for Polymorphism Detection

Schweikert, G., Zeller, G., Weigel, D., Schölkopf, B., Rätsch, G.

NIPS Workshop on Machine Learning in Computational Biology, December 2007 (talk)

Web [BibTex]

An Automated Combination of Kernels for Predicting Protein Subcellular Localization

Zien, A., Ong, C.

NIPS Workshop on Machine Learning in Computational Biology, December 2007 (talk)

Abstract
Protein subcellular localization is a crucial ingredient to many important inferences about cellular processes, including prediction of protein function and protein interactions. We propose a new class of protein sequence kernels which considers all motifs, including motifs with gaps. This class of kernels allows the inclusion of pairwise amino acid distances into their computation. We utilize an extension of the multiclass support vector machine (SVM) method which directly solves protein subcellular localization without resorting to the common approach of splitting the problem into several binary classification problems. To automatically search over families of possible amino acid motifs, we optimize over multiple kernels at the same time. We compare our automated approach to four other predictors on three different datasets, and show that we perform better than the current state of the art. Furthermore, our method provides some insights as to which features are most useful for determining subcellular localization, which are in agreement with biological reasoning.

Web [BibTex]

Challenges in Brain-Computer Interface Development: Induction, Measurement, Decoding, Integration

Hill, NJ.

Invited keynote talk at the launch of BrainGain, the Dutch BCI research consortium, November 2007 (talk)

Abstract
I'll present a perspective on Brain-Computer Interface development from Tübingen. Some of the benefits promised by BCI technology lie in the near foreseeable future, and some further away. Our motivation is to make BCI technology feasible for the people who could benefit from what it has to offer soon: namely, people in the "completely locked-in" state. I'll mention some of the challenges of working with this user group, and explain the specific directions they have motivated us to take in developing experimental methods, algorithms, and software.

[BibTex]

Policy Learning for Robotics

Peters, J.

14th International Conference on Neural Information Processing (ICONIP), November 2007 (talk)

Web [BibTex]

Hilbert Space Representations of Probability Distributions

Gretton, A.

2nd Workshop on Machine Learning and Optimization at the ISM, October 2007 (talk)

Abstract
Many problems in unsupervised learning require the analysis of features of probability distributions. At the most fundamental level, we might wish to determine whether two distributions are the same, based on samples from each - this is known as the two-sample or homogeneity problem. We use kernel methods to address this problem, by mapping probability distributions to elements in a reproducing kernel Hilbert space (RKHS). Given a sufficiently rich RKHS, these representations are unique: thus comparing feature space representations allows us to compare distributions without ambiguity. Applications include testing whether cancer subtypes are distinguishable on the basis of DNA microarray data, and whether low frequency oscillations measured at an electrode in the cortex have a different distribution during a neural spike. A more difficult problem is to discover whether two random variables drawn from a joint distribution are independent. It turns out that any dependence between pairs of random variables can be encoded in a cross-covariance operator between appropriate RKHS representations of the variables, and we may test independence by looking at a norm of the operator. We demonstrate this independence test by establishing dependence between an English text and its French translation, as opposed to French text on the same topic but otherwise unrelated. Finally, we show that this operator norm is itself a difference in feature means.
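The kernel two-sample statistic described above, the distance between kernel mean embeddings (maximum mean discrepancy), can be sketched in a few lines; the Gaussian kernel, bandwidth and permutation test below are standard but illustrative choices, not details taken from the talk.

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of the squared maximum mean discrepancy between samples X and Y."""
    return rbf(X, X, sigma).mean() + rbf(Y, Y, sigma).mean() - 2 * rbf(X, Y, sigma).mean()

def permutation_test(X, Y, n_perm=200, sigma=1.0, seed=0):
    """p-value for the null hypothesis that X and Y come from the same distribution."""
    rng = np.random.default_rng(seed)
    observed = mmd2(X, Y, sigma)
    pooled = np.vstack([X, Y])
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        Xp, Yp = pooled[idx[: len(X)]], pooled[idx[len(X):]]
        count += mmd2(Xp, Yp, sigma) >= observed
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1, size=(100, 2))
Y = rng.normal(0.5, 1, size=(100, 2))     # shifted mean: should be detected
print("p-value:", permutation_test(X, Y))
```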

PDF Web [BibTex]

Regression with Intervals

Kashima, H., Yamazaki, K., Saigo, H., Inokuchi, A.

International Workshop on Data-Mining and Statistical Science (DMSS2007), October 2007, JSAI Incentive Award. Talk was given by Hisashi Kashima. (talk)

Web [BibTex]

Support Vector Machine Learning for Interdependent and Structured Output Spaces

Altun, Y., Hofmann, T., Tsochantaridis, I.

In Predicting Structured Data, pages: 85-104, Advances in neural information processing systems, (Editors: Bakir, G. H. , T. Hofmann, B. Schölkopf, A. J. Smola, B. Taskar, S. V. N. Vishwanathan), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Web [BibTex]

Brisk Kernel ICA

Jegelka, S., Gretton, A.

In Large Scale Kernel Machines, pages: 225-250, Neural Information Processing, (Editors: Bottou, L. , O. Chapelle, D. DeCoste, J. Weston), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Abstract
Recent approaches to independent component analysis have used kernel independence measures to obtain very good performance in ICA, particularly in areas where classical methods experience difficulty (for instance, sources with near-zero kurtosis). In this chapter, we compare two efficient extensions of these methods for large-scale problems: random subsampling of entries in the Gram matrices used in defining the independence measures, and incomplete Cholesky decomposition of these matrices. We derive closed-form, efficiently computable approximations for the gradients of these measures, and compare their performance on ICA using both artificial and music data. We show that kernel ICA can scale up to much larger problems than yet attempted, and that incomplete Cholesky decomposition performs better than random sampling.
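The incomplete Cholesky factorization of a Gram matrix that the chapter compares against random subsampling can be sketched as a greedy pivoted factorization; the kernel, data and rank below are arbitrary illustrations, not the chapter's experiments.

```python
import numpy as np

def incomplete_cholesky(K, rank, tol=1e-10):
    """Low-rank factor G (n x k) with G @ G.T approximating the PSD matrix K."""
    n = K.shape[0]
    G = np.zeros((n, rank))
    d = np.diag(K).astype(float).copy()        # residual diagonal
    for j in range(rank):
        i = int(np.argmax(d))                  # greedy pivot: largest residual
        if d[i] < tol:
            return G[:, :j]
        G[:, j] = (K[:, i] - G[:, :j] @ G[i, :j]) / np.sqrt(d[i])
        d -= G[:, j] ** 2
    return G

# Gram matrix of an RBF kernel on random 1-D points.
rng = np.random.default_rng(0)
x = rng.standard_normal(300)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
G = incomplete_cholesky(K, rank=30)
print("relative approximation error:", np.linalg.norm(K - G @ G.T) / np.linalg.norm(K))
```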

PDF Web [BibTex]

MR-Based PET Attenuation Correction: Method and Validation

Hofmann, M., Steinke, F., Scheel, V., Brady, M., Schölkopf, B., Pichler, B.

Joint Molecular Imaging Conference, September 2007 (talk)

Abstract
PET/MR combines the high soft tissue contrast of Magnetic Resonance Imaging (MRI) and the functional information of Positron Emission Tomography (PET). For quantitative PET information, correction of tissue photon attenuation is mandatory. Usually in conventional PET, the attenuation map is obtained from a transmission scan, which uses a rotating source, or from the CT scan in case of combined PET/CT. In the case of a PET/MR scanner, there is insufficient space for the rotating source and ideally one would want to calculate the attenuation map from the MR image instead. Since MR images provide information about proton density of the different tissue types, it is not trivial to use these data for PET attenuation correction. We present a method for predicting the PET attenuation map from a given MR image, using a combination of atlas registration and recognition of local patterns. Using leave-one-out cross-validation, we show on a database of 16 MR-CT image pairs that our method reliably allows estimation of the CT image from the MR image. Subsequently, as in PET/CT, the PET attenuation map can be predicted from the CT image. On an additional dataset of MR/CT/PET triplets we quantitatively validate that our approach allows PET quantification with an error that is smaller than what would be clinically significant. We demonstrate our approach on T1-weighted human brain scans. However, the presented methods are more general, and current research focuses on applying the established methods to human whole-body PET/MRI applications.

PDF Web [BibTex]

Training a Support Vector Machine in the Primal

Chapelle, O.

In Large Scale Kernel Machines, pages: 29-50, Neural Information Processing, (Editors: Bottou, L. , O. Chapelle, D. DeCoste, J. Weston), MIT Press, Cambridge, MA, USA, September 2007, This is a slightly updated version of the Neural Computation paper (inbook)

Abstract
Most literature on Support Vector Machines (SVMs) concentrates on the dual optimization problem. In this paper, we would like to point out that the primal problem can also be solved efficiently, both for linear and non-linear SVMs, and that there is no reason to ignore this possibility. On the contrary, from the primal point of view, new families of algorithms for large scale SVM training can be investigated.
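A toy illustration of primal training is given below: gradient descent on an L2-regularized squared hinge loss for a linear SVM. Chapelle's chapter uses Newton-type updates and also treats the non-linear case; the step size and data here are assumptions for illustration only.

```python
import numpy as np

def train_primal_svm(X, y, lam=1e-2, lr=0.1, n_iter=500):
    """Linear SVM trained in the primal with a squared hinge loss.

    Minimizes  lam/2 * ||w||^2 + mean_i max(0, 1 - y_i (w.x_i + b))^2  by gradient descent.
    y must be in {-1, +1}.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(n_iter):
        margins = y * (X @ w + b)
        viol = np.maximum(0.0, 1.0 - margins)          # only violating points contribute
        grad_w = lam * w - 2.0 / n * (viol * y) @ X
        grad_b = -2.0 / n * np.sum(viol * y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Linearly separable toy problem.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.hstack([-np.ones(100), np.ones(100)])
w, b = train_primal_svm(X, y)
print("training accuracy:", np.mean(np.sign(X @ w + b) == y))
```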

PDF Web [BibTex]

Approximation Methods for Gaussian Process Regression

Quiñonero-Candela, J., Rasmussen, CE., Williams, CKI.

In Large-Scale Kernel Machines, pages: 203-223, Neural Information Processing, (Editors: Bottou, L. , O. Chapelle, D. DeCoste, J. Weston), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Abstract
A wealth of computationally efficient approximation methods for Gaussian process regression have been recently proposed. We give a unifying overview of sparse approximations, following Quiñonero-Candela and Rasmussen (2005), and a brief review of approximate matrix-vector multiplication methods.
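One of the classical sparse approximations covered by that unifying view, the Subset of Regressors (SoR) predictive mean, can be sketched as follows; the kernel, noise level and choice of inducing inputs are illustrative assumptions rather than the chapter's setup.

```python
import numpy as np

def rbf(A, B, ell=0.2):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def sor_predict(X, y, Xq, Xu, noise=0.1):
    """Subset-of-Regressors predictive mean at query points Xq using inducing inputs Xu."""
    Kuf = rbf(Xu, X)
    Kuu = rbf(Xu, Xu)
    A = Kuf @ Kuf.T + noise ** 2 * Kuu           # (m, m) system instead of (n, n)
    alpha = np.linalg.solve(A, Kuf @ y)
    return rbf(Xq, Xu) @ alpha

# Noisy sine data; 20 inducing points replace the full 500-point Gram matrix.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(500)
Xu = np.linspace(0, 1, 20)[:, None]
print("prediction at x=0.25:", sor_predict(X, y, np.array([[0.25]]), Xu)[0])
```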

PDF Web [BibTex]

Density Estimation of Structured Outputs in Reproducing Kernel Hilbert Spaces

Altun, Y., Smola, A.

In Predicting Structured Data, pages: 283-300, Advances in neural information processing systems, (Editors: BakIr, G. H., T. Hofmann, B. Schölkopf, A. J. Smola, B. Taskar, S. V.N. Vishwanathan), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Abstract
In this paper we study the problem of estimating conditional probability distributions for structured output prediction tasks in Reproducing Kernel Hilbert Spaces. More specifically, we prove decomposition results for undirected graphical models, give constructions for kernels, and show connections to Gaussian Process classification. Finally we present efficient means of solving the optimization problem and apply this to label sequence learning. Experiments on named entity recognition and pitch accent prediction tasks demonstrate the competitiveness of our approach.

Web [BibTex]

Trading Convexity for Scalability

Collobert, R., Sinz, F., Weston, J., Bottou, L.

In Large Scale Kernel Machines, pages: 275-300, Neural Information Processing, (Editors: Bottou, L. , O. Chapelle, D. DeCoste, J. Weston), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Abstract
Convex learning algorithms, such as Support Vector Machines (SVMs), are often seen as highly desirable because they offer strong practical properties and are amenable to theoretical analysis. However, in this work we show how nonconvexity can provide scalability advantages over convexity. We show how concave-convex programming can be applied to produce (i) faster SVMs where training errors are no longer support vectors, and (ii) much faster Transductive SVMs.

PDF Web [BibTex]

Bayesian methods for NMR structure determination

Habeck, M.

29th Annual Discussion Meeting: Magnetic Resonance in Biophysical Chemistry, September 2007 (talk)

Web [BibTex]

Classifying Event-Related Desynchronization in EEG, ECoG and MEG signals

Hill, N., Lal, T., Tangermann, M., Hinterberger, T., Widman, G., Elger, C., Schölkopf, B., Birbaumer, N.

In Toward Brain-Computer Interfacing, pages: 235-260, Neural Information Processing, (Editors: G Dornhege and J del R Millán and T Hinterberger and DJ McFarland and K-R Müller), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

PDF Web [BibTex]

Joint Kernel Maps

Weston, J., Bakir, G., Bousquet, O., Mann, T., Noble, W., Schölkopf, B.

In Predicting Structured Data, pages: 67-84, Advances in neural information processing systems, (Editors: GH Bakir and T Hofmann and B Schölkopf and AJ Smola and B Taskar and SVN Vishwanathan), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Web [BibTex]

Brain-Computer Interfaces for Communication in Paralysis: A Clinical Experimental Approach

Hinterberger, T., Nijboer, F., Kübler, A., Matuz, T., Furdea, A., Mochty, U., Jordan, M., Lal, T., Hill, J., Mellinger, J., Bensch, M., Tangermann, M., Widman, G., Elger, C., Rosenstiel, W., Schölkopf, B., Birbaumer, N.

In Toward Brain-Computer Interfacing, pages: 43-64, Neural Information Processing, (Editors: G. Dornhege and J del R Millán and T Hinterberger and DJ McFarland and K-R Müller), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

PDF Web [BibTex]

Thinking Out Loud: Research and Development of Brain Computer Interfaces

Hill, NJ.

Invited keynote talk at the Max Planck Society's PhDNet Workshop, July 2007 (talk)

Abstract
My principal interest is in applying machine-learning methods to the development of Brain-Computer Interfaces (BCI). This involves the classification of a user's intentions or mental states, or regression against some continuous intentional control signal, using brain signals obtained for example by EEG, ECoG or MEG. The long-term aim is to develop systems that a completely paralysed person (such as someone suffering from advanced Amyotrophic Lateral Sclerosis) could use to communicate. Such systems have the potential to improve the lives of many people who would be otherwise completely unable to communicate, but they are still very much in the research and development stages.

PDF [BibTex]
