My scientific interests are in the field of machine learning and inference from empirical data. In particular, I study kernel methods for extracting regularities from possibly high-dimensional data. These regularities are usually statistical ones; however, in recent years I have also become interested in methods for finding causal structures that underlie statistical dependences. I have worked on a number of different applications of machine learning; in our field, you get "to play in everyone's backyard." Most recently, I have been trying to play in the backyard of astronomers and photographers.
With the growing interest in (how to make money with) big data, machine learning has significantly gained in popularity. We have published an article in the German newspaper FAZ, discussing some of the implications. Disclaimer: the text that appears above our names was neither written nor approved by us.
M.Sc. in mathematics and Lionel Cooper Memorial Prize, University of London (1992)
Diplom in physics (Tübingen, 1994)
doctorate in computer science from the Technical University Berlin (1997); thesis on Support Vector Learning (main advisor: V. Vapnik, AT&T Bell Labs) won the annual dissertation prize of the German Association for Computer Science (GI)
If you'd like to contact me, please consider these two notes:
1. I recently became co-editor-in-chief of JMLR. I work for JMLR because I believe in its open access model, but it takes a lot of time. During my JMLR term, please don't try to convince me to take on other journal or grant reviewing duties.
2. I am not very organized with my e-mail, so if you want to apply for a position in my lab, please send your application only to Sekretariat-Schoelkopf@tuebingen.mpg.de. Note that we do not respond to non-personalized applications that look like they are being sent to a large number of places simultaneously.
We are always happy to receive outstanding applications for PhD positions and postdocs.
In Advances in neural information processing systems 21, pages: 665-672, (Editors: D Koller and D Schuurmans and Y Bengio and L Bottou), Curran, Red Hook, NY, USA, 22nd Annual Conference on Neural Information Processing Systems (NIPS), June 2009 (inproceedings)
From an information-theoretic perspective, a noisy transmission system such as a visual Brain-Computer Interface (BCI) speller could benefit from the use of error-correcting codes. However, optimizing the code solely according to the maximal minimum-Hamming-distance criterion tends to lead to an overall increase in the frequency of target stimuli, and hence a significantly reduced average target-to-target interval (TTI), leading to difficulties in classifying the individual event-related potentials (ERPs) due to overlap and refractory effects. Clearly, any change to the stimulus setup must also respect the possible psychophysiological consequences. Here we report new EEG data from experiments in which we explore stimulus types and codebooks in a within-subject design, finding an interaction between the two factors. Our data demonstrate that the traditional row-column code has particular spatial properties that lead to better performance than one would expect from its TTIs and Hamming distances alone, but nonetheless error-correcting codes can improve performance provided the right stimulus type is used.
In Proceedings of the 26th International Conference on Machine Learning, pages: 801-808, (Editors: A Danyluk and L Bottou and ML Littman), ACM Press, New York, NY, USA, ICML, June 2009 (inproceedings)
We propose a method that detects the true direction of time series by fitting an autoregressive moving average model to the data. Whenever the noise is independent of the previous samples for one ordering of the observations, but dependent for the opposite ordering, we infer the former direction to be the true one. We prove that our method works in the population case as long as the noise of the process is not normally distributed (in the latter case, the direction is not identifiable). A new and important implication of our result is that it confirms, in the case of time series, a fundamental conjecture in causal reasoning: if after regression the noise is independent of the signal for one direction and dependent for the other, then the former represents the true causal direction. We test our approach on two types of data: simulated data sets conforming to our modeling assumptions, and real-world EEG time series. Our method makes a decision for a significant fraction of both data sets, and these decisions are mostly correct. For real-world data, our approach outperforms alternative solutions to the problem of time direction recovery.
In Advances in neural information processing systems 21, pages: 689-696, (Editors: D Koller and D Schuurmans and Y Bengio and L Bottou), Curran, Red Hook, NY, USA, 22nd Annual Conference on Neural Information Processing Systems (NIPS), June 2009 (inproceedings)
The discovery of causal relationships between a set of observed variables is a fundamental problem in science. For continuous-valued data linear acyclic causal models are often used because these models are well understood and there are well-known methods to fit them to data. In reality, of course, many causal relationships are more or less nonlinear, raising some doubts as to the applicability and usefulness of purely linear methods. In this contribution we show that in fact the basic linear framework can be generalized to nonlinear models with additive noise. In this extended framework, nonlinearities in the data-generating process are in fact a blessing rather than a curse, as they typically provide information on the underlying causal system and allow more aspects of the true data-generating mechanisms to be identified. In addition to theoretical results we show simulations and some simple real data experiments illustrating the identification power provided by nonlinearities.
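The residual-independence principle behind additive-noise causal discovery can be illustrated with a minimal sketch (not the paper's actual method or code; the polynomial regression, sample size, and median-heuristic HSIC bandwidths below are illustrative assumptions): fit a nonlinear regression in both candidate directions and compare how strongly the residuals depend on the regressor.

```python
import numpy as np

def rbf_gram(x, sigma):
    # Gaussian-kernel Gram matrix for a 1-D sample
    d = x[:, None] - x[None, :]
    return np.exp(-d**2 / (2 * sigma**2))

def hsic(x, y):
    # Biased HSIC estimate with median-heuristic bandwidths
    n = len(x)
    sx = np.median(np.abs(x[:, None] - x[None, :])) + 1e-12
    sy = np.median(np.abs(y[:, None] - y[None, :])) + 1e-12
    K, L = rbf_gram(x, sx), rbf_gram(y, sy)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1)**2

def residual_dependence(cause, effect, deg=3):
    # Fit a polynomial regression, then measure dependence of residuals on input
    coeffs = np.polyfit(cause, effect, deg)
    resid = effect - np.polyval(coeffs, cause)
    return hsic(cause, resid)

rng = np.random.RandomState(0)
x = rng.uniform(-2, 2, 300)
y = x**3 + rng.normal(0, 1.0, 300)   # additive-noise model: x causes y

forward = residual_dependence(x, y)    # correct direction
backward = residual_dependence(y, x)   # wrong direction
# The true direction leaves residuals (approximately) independent of the input,
# so its dependence score should be smaller.
print(forward, backward)
```

In the correct direction the residuals are just the independent additive noise, so their HSIC with the input is small; in the reverse direction no regression function can make the residuals independent.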
In Advances in neural information processing systems 21, pages: 1433-1440, (Editors: D Koller and D Schuurmans and Y Bengio and L Bottou), Curran, Red Hook, NY, USA, 22nd Annual Conference on Neural Information Processing Systems (NIPS), June 2009 (inproceedings)
We study the problem of domain transfer for a supervised classification task in mRNA splicing. We consider a number of recent domain transfer methods from machine learning, including some that are novel, and evaluate them on genomic sequence data from model organisms of varying evolutionary distance. We find that in cases where the organisms are not closely related, the use of domain adaptation methods can help improve classification performance.
In Advances in neural information processing systems 21, pages: 1713-1720, (Editors: D Koller and D Schuurmans and Y Bengio and L Bottou), Curran, Red Hook, NY, USA, 22nd Annual Conference on Neural Information Processing Systems (NIPS), June 2009 (inproceedings)
This paper introduces a new approach to constructing meaningful lower dimensional representations of sets of data points. We argue that constraining the mapping between the high and low dimensional spaces to be a diffeomorphism is a natural way of ensuring that pairwise distances are approximately preserved. Accordingly we develop an algorithm which diffeomorphically maps the data near to a lower dimensional subspace and then projects onto that subspace. The problem of solving for the mapping is transformed into one of solving for an Eulerian flow field which we compute using ideas from kernel methods. We demonstrate the efficacy of our approach on various real world data sets.
Walder, C., Schölkopf, B.: Diffeomorphic Dimensionality Reduction
In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, pages: 249-257, (Editors: J Bilmes and AY Ng), AUAI Press, Corvallis, OR, USA, UAI, June 2009 (inproceedings)
We propose a method for inferring the existence of a latent common cause ("confounder") of two observed random variables. The method assumes that the two effects of the confounder are (possibly nonlinear) functions of the confounder plus independent, additive noise. We discuss under which conditions the model is identifiable (up to an arbitrary reparameterization of the confounder) from the joint distribution of the effects. We state and prove a theoretical result that provides evidence for the conjecture that the model is generically identifiable under suitable technical conditions. In addition, we propose a practical method to estimate the confounder from a finite i.i.d. sample of the effects and illustrate that the method works well on both simulated and real-world data.
In Proceedings of the 26th International Conference on Machine Learning, pages: 745-752, (Editors: A Danyluk and L Bottou and M Littman), ACM Press, New York, NY, USA, ICML, June 2009 (inproceedings)
Motivated by causal inference problems, we propose a novel method for regression that minimizes the statistical dependence between regressors and residuals. The key advantage of this approach to regression is that it does not assume a particular distribution of the noise, i.e., it is non-parametric with respect to the noise distribution. We argue that the proposed regression method is well suited to the task of causal inference in additive noise models. A practical disadvantage is that the resulting optimization problem is generally non-convex and can be difficult to solve. Nevertheless, we report good results on one of the tasks of the NIPS 2008 Causality Challenge, where the goal is to distinguish causes from effects in pairs of statistically dependent variables. In addition, we propose an algorithm for efficiently inferring causal models from observational data for more than two variables. The required number of regressions and independence tests is quadratic in the number of variables, which is a significant improvement over the simple method that tests all possible DAGs.
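To see why a quadratic number of regressions and independence tests matters, one can count labeled DAGs using Robinson's recurrence; the sketch below (illustrative, not from the paper) contrasts the super-exponential DAG count with the quadratic cost.

```python
from math import comb

def count_dags(n):
    # Robinson's recurrence for the number of labeled DAGs on n nodes:
    #   a(n) = sum_{k=1}^{n} (-1)^(k+1) * C(n, k) * 2^(k*(n-k)) * a(n-k),
    # with a(0) = 1.
    a = [1]
    for m in range(1, n + 1):
        a.append(sum((-1)**(k + 1) * comb(m, k) * 2**(k * (m - k)) * a[m - k]
                     for k in range(1, m + 1)))
    return a[n]

for n in range(1, 8):
    print(n, count_dags(n), "DAGs vs", n * n, "regressions/tests (order n^2)")
```

Already at seven variables there are over a billion DAGs, while the pairwise regression-plus-independence-test scheme needs only on the order of 49 operations.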
In Advances in neural information processing systems 21, pages: 1441-1448, (Editors: D Koller and D Schuurmans and Y Bengio and L Bottou), Curran, Red Hook, NY, USA, 22nd Annual Conference on Neural Information Processing Systems (NIPS), June 2009 (inproceedings)
We show how improved sequences for magnetic resonance imaging can be found through automated optimization of Bayesian design scores. Combining recent advances in approximate Bayesian inference and natural image statistics with high-performance numerical computation, we propose the first scalable Bayesian experimental design framework for this problem of high relevance to clinical and brain research. Our solution requires approximate inference for dense, non-Gaussian models on a scale seldom addressed before. We propose a novel scalable variational inference algorithm, and show how powerful methods of numerical mathematics can be modified to compute primitives in our framework. Our approach is evaluated on a realistic setup with raw data from a 3T MR scanner.
In Advances in neural information processing systems 21, pages: 473-480, (Editors: D Koller and D Schuurmans and Y Bengio and L Bottou), Curran, Red Hook, NY, USA, 22nd Annual Conference on Neural Information Processing Systems (NIPS), June 2009 (inproceedings)
Embeddings of random variables in reproducing kernel Hilbert spaces (RKHSs) may be used to conduct statistical inference based on higher order moments. For sufficiently rich (characteristic) RKHSs, each probability distribution has a unique embedding, allowing all statistical properties of the distribution to be taken into consideration. Necessary and sufficient conditions for an RKHS to be characteristic exist for R^n. In the present work, conditions are established for an RKHS to be characteristic on groups and semigroups. Illustrative examples are provided, including characteristic kernels on periodic domains, rotation matrices, and R^n_+.
In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages: 186-193, IEEE Service Center, Piscataway, NJ, USA, CVPR, June 2009 (inproceedings)
Multi-modal image registration is a challenging problem in medical imaging. The goal is to align anatomically identical structures; however, their appearance in images acquired with different imaging devices, such as CT or MR, may be very different. Registration algorithms generally deform one image, the floating image, such that it matches with a second, the reference image, by maximizing some similarity score between the deformed and the reference image. Instead of using a universal, but a priori fixed similarity criterion such as mutual information, we propose learning a similarity measure in a discriminative manner such that the reference and correctly deformed floating images receive high similarity scores. To this end, we develop an algorithm derived from max-margin structured output learning, and employ the learned similarity measure within a standard rigid registration algorithm. Compared to other approaches, our method adapts to the specific registration problem at hand and exploits correlations between neighboring pixels in the reference and the floating image. Empirical evaluation on CT-MR/PET-MR rigid registration tasks demonstrates that our approach yields robust performance and outperforms state-of-the-art methods for multi-modal medical image registration.
In Proceedings of the 2nd Workshop on Data Mining using Matrices and Tensors (DMMT 2009), pages: 32-41, (Editors: C Ding and T Li), ACM Press, New York, NY, USA, 2nd Workshop on Data Mining using Matrices and Tensors (DMMT/KDD), June 2009 (inproceedings)
The analysis of n-ary relations receives attention in many different fields, for instance biology, web mining, and social studies. In the basic setting, there are n sets of instances, and each observation associates n instances, one from each set. A common approach to explore these n-way data is the search for n-set patterns. An n-set pattern consists of specific subsets of the n instance sets such that all possible n-ary associations between the corresponding instances are observed. This provides a higher-level view of the data, revealing associative relationships between groups of instances. Here, we generalize this approach in two respects. First, we tolerate missing observations to a certain degree, that means we are also interested in n-sets where most (although not all) of the possible combinations have been recorded in the data. Second, we take association weights into account. More precisely, we propose a method to enumerate all n-sets that satisfy a minimum threshold with respect to the average association weight. Non-observed associations obtain by default a weight of zero. Technically, we solve the enumeration task using a reverse search strategy, which allows for effective pruning of the search space. In addition, our algorithm provides a ranking of the solutions and can consider further constraints. We show experimental results on artificial and real-world data sets from different domains.
Journal of Vision, 9(5:7):1-15, May 2009 (article)
The human visual system is foveated, that is, outside the central visual field resolution and acuity drop rapidly. Nonetheless much of a visual scene is perceived after only a few saccadic eye movements, suggesting an effective strategy for selecting saccade targets. It has been known for some time that local image structure at saccade targets influences the selection process. However, the question of what the most relevant visual features are is still under debate. Here we show that center-surround patterns emerge as the optimal solution for predicting saccade targets from their local image structure. The resulting model, a one-layer feed-forward network, is surprisingly simple compared to previously suggested models which assume much more complex computations such as multi-scale processing and multiple feature channels. Nevertheless, our model is equally predictive. Furthermore, our findings are consistent with neurophysiological hardware in the superior colliculus. Bottom-up visual saliency may thus not be computed cortically as has been thought previously.
In Proceedings of the First IEEE International Conference on Computational Photography (ICCP 2009), pages: 1-9, IEEE, Piscataway, NJ, USA, First IEEE International Conference on Computational Photography (ICCP), April 2009 (inproceedings)
Photographs taken with long exposure or high ISO setting may contain substantial amounts of noise, drastically reducing the Signal-To-Noise Ratio (SNR). This paper presents a novel optimization approach for denoising. It is based on a library of dark frames previously taken under varying conditions of temperature, ISO setting and exposure time, and a quality measure or prior for the class of images to denoise. The method automatically computes a synthetic dark frame that, when subtracted from an image, optimizes the quality measure. For specific choices of the quality measure, the denoising problem reduces to a quadratic programming (QP) problem that can be solved efficiently. We show experimentally that it is sufficient to consider a limited subsample of pixels when evaluating the quality measure in the optimization, in which case the complexity of the procedure does not depend on the size of the images but only on the number of dark frames. We provide quantitative experimental results showing that our method automatically computes dark frames that are competitive with those taken under idealized conditions (controlled temperature, ISO setting, exposure time, and averaging of multiple exposures). We provide application examples in astronomical image denoising. The method is validated on two CMOS SLRs.
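A much-simplified sketch of the idea (illustrative only; the paper uses a library of real dark frames and a QP over a tunable quality measure or prior): if the quality measure is taken to be the gradient energy of the corrected image, the optimal dark-frame coefficients follow from linear least squares. The synthetic scene, frame count, and coefficient values below are assumptions made for the demo.

```python
import numpy as np

rng = np.random.RandomState(0)

# Synthetic setup: a smooth "scene" contaminated by a fixed-pattern dark frame
h = w = 64
yy, xx = np.mgrid[0:h, 0:w]
scene = np.sin(xx / 10.0) + np.cos(yy / 12.0)
dark_frames = [rng.normal(0, 1, (h, w)) for _ in range(3)]  # dark-frame library
true_coeffs = np.array([0.7, 0.0, 0.0])
noisy = scene + sum(c * d for c, d in zip(true_coeffs, dark_frames))

def grad(img):
    # Horizontal and vertical finite differences, stacked into one vector
    return np.concatenate([np.diff(img, axis=0).ravel(),
                           np.diff(img, axis=1).ravel()])

# Quality measure: small gradient energy of the corrected image.
# Minimizing ||grad(noisy - sum_i c_i * dark_i)||^2 is linear least squares in c
# (adding constraints on c turns this into a QP, as in the paper).
A = np.column_stack([grad(d) for d in dark_frames])
b = grad(noisy)
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
print(coeffs)  # close to true_coeffs: the synthetic dark frame is recovered
```

Because the scene is smooth while the dark frames have large, distinctive gradients, the least-squares fit attributes nearly all of the residual gradient energy to the correct frame.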
17(2627), 17th Annual Meeting of the International Society for Magnetic Resonance in Medicine (ISMRM), April 2009 (poster)
MR image reconstruction from undersampled k-space can be improved by nonlinear denoising estimators since they incorporate statistical prior knowledge about image sparsity. Reconstruction quality depends crucially on the undersampling design (k-space trajectory), in a manner complicated by the nonlinear and signal-dependent characteristics of these methods. We propose an algorithm to assess and optimize k-space trajectories for sparse MRI reconstruction, based on Bayesian experimental design, which is scaled up to full MR images by a novel variational relaxation to iteratively reweighted FFT or gridding computations. Designs are built sequentially by adding phase encodes predicted to be most informative, given the combination of previous measurements with image prior information.
Journal of Neural Engineering, 6(2):1-9, April 2009 (article)
We reveal the presence of refractory and overlap effects in the event-related potentials in visual P300 speller datasets, and we show their negative impact on the performance of the system. This finding has important implications for how to encode the letters that can be selected for communication. However, we show that such effects are dependent on stimulus parameters: an alternative stimulus type based on apparent motion suffers less from the refractory effects and leads to an improved letter prediction performance.
17(260), 17th Annual Meeting of the International Society for Magnetic Resonance in Medicine (ISMRM), April 2009 (poster)
There has recently been a growing interest in combining PET and MR. Attenuation correction (AC), which accounts for radiation attenuation properties of the tissue, is mandatory for quantitative PET. In the case of PET/MR the attenuation map needs to be determined from the MR image. This is intrinsically difficult as MR intensities are not related to the electron density information of the attenuation map. Using ultra-short echo (UTE) acquisition, atlas registration and machine learning, we present methods that allow prediction of the attenuation map based on the MR image both for brain and whole body imaging.
In Proceedings of the First IEEE International Conference Computational Photography (ICCP 2009), pages: 1-7, IEEE, Piscataway, NJ, USA, First IEEE International Conference on Computational Photography (ICCP), April 2009 (inproceedings)
Atmospheric turbulence blurs astronomical images taken by earth-based telescopes. Taking many short-time exposures in such a situation provides noisy images of the same object, where each noisy image has a different blur. Commonly, astronomers apply a technique called “Lucky Imaging” that selects a few of the recorded frames that fulfill certain criteria, such as reaching a certain peak intensity (“Strehl ratio”). The selected frames are then averaged to obtain a better image. In this paper we introduce and analyze a new method that exploits all the frames and generates an improved image in an online fashion. Our initial experiments with controlled artificial data and real-world astronomical datasets yield promising results.
Expert Systems with Applications, 36(2):3284-3292, March 2009 (article)
In bioinformatics, there exist multiple descriptions of graphs for the same set of genes or proteins. For instance, in yeast systems, graph edges can represent different relationships such as protein-protein interactions, genetic interactions, or co-participation in a protein complex. Relying on similarities between nodes, each graph can be used independently for prediction of protein function. However, since different graphs contain partly independent and partly complementary information about the problem at hand, one can enhance the total information extracted by combining all graphs. In this paper, we propose a method for integrating multiple graphs within a framework of semi-supervised learning. The method alternates between minimizing the objective function with respect to network output and with respect to combining weights. We apply the method to the task of protein functional class prediction in yeast. The proposed method performs significantly better than the same algorithm trained on any single graph.
European Journal of Nuclear Medicine and Molecular Imaging, 36(Supplement 1):93-104, March 2009 (article)
Introduction: Positron emission tomography (PET) is a fully quantitative technology for imaging metabolic pathways and dynamic processes in vivo. Attenuation correction of raw PET data is a prerequisite for quantification and is typically based on separate transmission measurements. In PET/CT, however, attenuation correction is performed routinely based on the available CT transmission data.
Objective: Recently, combined PET/magnetic resonance (MR) imaging has been proposed as a viable alternative to PET/CT. Current concepts of PET/MRI do not include CT-like transmission sources and, therefore, alternative methods of PET attenuation correction must be found. This article reviews existing approaches to MR-based attenuation correction (MR-AC). Most groups have proposed MR-AC algorithms for brain PET studies and, more recently, also for torso PET/MR imaging. Most MR-AC strategies require the use of complementary MR and transmission images, or morphology templates generated from transmission images. We review and discuss these algorithms and point out challenges for using MR-AC in clinical routine.
Discussion: MR-AC is work in progress, with potentially promising results from a template-based approach applicable to both brain and torso imaging. While efforts are ongoing in making clinically viable MR-AC fully automatic, further studies are required to realize the potential benefits of MR-based motion compensation and partial volume correction of the PET data.
Neural Computation, 21(1):272-300, January 2009 (article)
We shed light on the discrimination between patterns belonging to two different classes by casting this decoding problem into a generalized prototype framework. The discrimination process is then separated into two stages: a projection stage that reduces the dimensionality of the data by projecting it on a line, and a threshold stage where the distributions of the projected patterns of both classes are separated. For this, we extend the popular mean-of-class prototype classification using algorithms from machine learning that satisfy a set of invariance properties. We report a simple yet general approach to express different types of linear classification algorithms in an identical and easy-to-visualize formal framework using generalized prototypes, where these prototypes are used to express the normal vector and offset of the hyperplane. We investigate non-margin classifiers such as the classical prototype classifier, the Fisher classifier, and the relevance vector machine. We then study hard and soft margin classifiers such as the support vector machine and a boosted version of the prototype classifier. Subsequently, we relate mean-of-class prototype classification to other classification algorithms by showing that the prototype classifier is a limit of any soft margin classifier and that boosting a prototype classifier yields the support vector machine. While giving novel insights into classification per se by presenting a common and unified formalism, our generalized prototype framework also provides an efficient visualization and a principled comparison of machine learning classification.
In Dataset Shift in Machine Learning, pages: 131-160, (Editors: Quiñonero-Candela, J., Sugiyama, M., Schwaighofer, A. and Lawrence, N. D.), MIT Press, Cambridge, MA, USA, 2009 (inbook)
Given sets of observations of training and test data, we consider the problem of re-weighting the training data such that its distribution more closely matches that of the test data. We achieve this goal by matching covariate distributions between training and test sets in a high-dimensional feature space (specifically, a reproducing kernel Hilbert space). This approach does not require distribution estimation. Instead, the sample weights are obtained by a simple quadratic programming procedure. We provide a uniform convergence bound on the distance between the reweighted training feature mean and the test feature mean, a transductive bound on the expected loss of an algorithm trained on the reweighted data, and a connection to single-class SVMs. While our method is designed to deal with the case of simple covariate shift (in the sense of Chapter ??), we have also found benefits for sample selection bias on the labels. Our correction procedure yields its greatest and most consistent advantages when the learning algorithm returns a classifier/regressor that is "simpler" than the data might suggest.
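The re-weighting step described here (kernel mean matching) can be sketched as a small QP. The sample sizes, kernel bandwidth, weight bound, and the generic SLSQP solver below are illustrative choices, not the chapter's experimental setup.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.RandomState(0)
X_tr = rng.normal(0.0, 1.0, (60, 1))   # training sample
X_te = rng.normal(0.8, 1.0, (80, 1))   # test sample with shifted covariates

def rbf(A, B, sigma=1.0):
    # Gaussian kernel matrix between two samples
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

n_tr, n_te = len(X_tr), len(X_te)
K = rbf(X_tr, X_tr)                                  # train-train Gram matrix
kappa = (n_tr / n_te) * rbf(X_tr, X_te).sum(axis=1)  # train-test interactions

# QP: minimize 0.5 w'Kw - kappa'w  s.t.  0 <= w_i <= B_max,  sum(w) = n_tr.
# This is equivalent (up to constants) to minimizing the RKHS distance between
# the weighted training mean and the test mean.
B_max = 10.0
obj = lambda w: 0.5 * w @ K @ w - kappa @ w
jac = lambda w: K @ w - kappa
cons = {"type": "eq", "fun": lambda w: w.sum() - n_tr}
res = minimize(obj, np.ones(n_tr), jac=jac, bounds=[(0, B_max)] * n_tr,
               constraints=[cons], method="SLSQP")
w = res.x

def mean_embed_dist(weights):
    # Squared RKHS distance between weighted train mean and test mean
    return (weights @ K @ weights / n_tr**2
            - 2 * weights @ rbf(X_tr, X_te).sum(axis=1) / (n_tr * n_te)
            + rbf(X_te, X_te).sum() / n_te**2)

print(mean_embed_dist(np.ones(n_tr)), mean_embed_dist(w))
```

Since uniform weights are a feasible point of the QP, the optimized weights can only bring the training feature mean closer to the test feature mean.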
In Advances in Neural Information Processing Systems 22, pages: 1750-1758, (Editors: Y Bengio and D Schuurmans and J Lafferty and C Williams and A Culotta), Curran, Red Hook, NY, USA, 23rd Annual Conference on Neural Information Processing Systems (NIPS), 2009 (inproceedings)
Embeddings of probability measures into reproducing kernel Hilbert spaces have been proposed as a straightforward and practical means of representing and comparing probabilities. In particular, the distance between embeddings (the maximum mean discrepancy, or MMD) has several key advantages over many classical metrics on distributions, namely easy computability, fast convergence and low bias of finite sample estimates. An important requirement of the embedding RKHS is that it be characteristic: in this case, the MMD between two distributions is zero if and only if the distributions coincide. Three new results on the MMD are introduced in the present study. First, it is established that MMD corresponds to the optimal risk of a kernel classifier, thus forming a natural link between the distance between distributions and their ease of classification. An important consequence is that a kernel must be characteristic to guarantee classifiability between distributions in the RKHS. Second, the class of characteristic kernels is broadened to incorporate all strictly positive definite kernels: these include non-translation-invariant kernels and kernels on non-compact domains. Third, a generalization of the MMD is proposed for families of kernels, as the supremum over MMDs on a class of kernels (for instance the Gaussian kernels with different bandwidths). This extension is necessary to obtain a single distance measure if a large selection or class of characteristic kernels is potentially appropriate. This generalization is reasonable, given that it corresponds to the problem of learning the kernel by minimizing the risk of the corresponding kernel classifier. The generalized MMD is shown to have consistent finite sample estimates, and its performance is demonstrated on a homogeneity testing example.
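A minimal numpy sketch of the (biased) squared-MMD estimate with a Gaussian kernel, which is characteristic on R^d, plus the supremum-over-bandwidths generalization mentioned in the abstract; the sample sizes, dimensions, shift, and bandwidth family are illustrative assumptions.

```python
import numpy as np

def mmd2(X, Y, sigma=1.0):
    # Biased estimate of the squared MMD with a Gaussian kernel:
    #   MMD^2 = mean k(X,X) - 2 mean k(X,Y) + mean k(Y,Y)
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))
    return k(X, X).mean() - 2 * k(X, Y).mean() + k(Y, Y).mean()

rng = np.random.RandomState(0)
P1 = rng.normal(0, 1, (200, 2))
P2 = rng.normal(0, 1, (200, 2))     # same distribution as P1
Q  = rng.normal(1.0, 1, (200, 2))   # mean-shifted distribution

same = mmd2(P1, P2)   # near zero: distributions coincide
diff = mmd2(P1, Q)    # clearly larger: distributions differ

# Generalized MMD: supremum of MMDs over a family of kernels,
# here Gaussian kernels with a few bandwidths
gen = max(mmd2(P1, Q, sigma=s) for s in (0.5, 1.0, 2.0))
print(same, diff, gen)
```

Because the biased estimate is a squared RKHS norm it is nonnegative, and the generalized MMD is by construction at least as large as the MMD for any single kernel in the family.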
Journal of Vision, 9(8):article 32, 2009 (article)
For simple visual patterns under the experimenter's control, we impose which information, or features, an observer can use to solve a given perceptual task. For natural vision tasks, however, there are typically a multitude of potential features in a given visual scene which the visual system may be exploiting when analyzing it: edges, corners, contours, etc. Here we describe a novel non-linear system identification technique based on modern machine learning methods that allows the critical features an observer uses to be inferred directly from the observer's data. The method neither requires stimuli to be embedded in noise nor is it limited to linear perceptive fields (classification images). We demonstrate our technique by deriving the critical image features observers fixate in natural scenes (bottom-up visual saliency). Unlike previous studies where the relevant structure is determined manually, e.g. by selecting Gabors as visual filters, we do not make any assumptions in this regard, but numerically infer their number and properties from the eye-movement data. We show that center-surround patterns emerge as the optimal solution for predicting saccade targets from local image structure. The resulting model, a one-layer feed-forward network with contrast gain control, is surprisingly simple compared to previously suggested saliency models. Nevertheless, our model is equally predictive. Furthermore, our findings are consistent with neurophysiological hardware in the superior colliculus. Bottom-up visual saliency may thus not be computed cortically as has been thought previously.
In Kernel Methods for Remote Sensing Data Analysis, pages: 25-48, 2, (Editors: Gustavo Camps-Valls and Lorenzo Bruzzone), Wiley, New York, NY, USA, 2009 (inbook)
Kernel learning algorithms are currently becoming a standard tool in the area of machine learning and pattern recognition. In this chapter we review the fundamental theory of kernel learning. As the basic building block we introduce the kernel function, which provides an elegant and general way to compare possibly very complex objects. We then review the concept of a reproducing kernel Hilbert space and state the representer theorem. Finally, we give an overview of the most prominent algorithms, which are support vector classification and regression, Gaussian processes, and kernel principal component analysis. With multiple kernel learning and structured output prediction we also introduce some more recent advancements in the field.
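As an illustration of one algorithm the chapter covers, here is a compact kernel PCA sketch (a generic textbook version, not the chapter's code; the two-rings toy data, bandwidth, and component count are assumptions for the demo): center the Gram matrix in feature space and project onto the leading eigenvectors.

```python
import numpy as np

def kernel_pca(X, n_components=2, sigma=1.0):
    # Kernel PCA with a Gaussian kernel: double-center the Gram matrix,
    # eigendecompose, and project onto the leading components.
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H                       # centering in feature space
    vals, vecs = np.linalg.eigh(Kc)      # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]
    # Scale eigenvectors so the feature-space principal axes have unit norm
    alphas = vecs[:, :n_components] / np.sqrt(vals[:n_components])
    return Kc @ alphas                   # embedded points

rng = np.random.RandomState(0)
# Two concentric rings: linearly inseparable, a classic kernel PCA example
t = rng.uniform(0, 2 * np.pi, 200)
r = np.where(np.arange(200) < 100, 1.0, 3.0)
X = np.c_[r * np.cos(t), r * np.sin(t)] + rng.normal(0, 0.05, (200, 2))
Z = kernel_pca(X, n_components=2, sigma=1.0)
print(Z.shape)  # (200, 2)
```

The choice of kernel function is the modeling step: swapping the Gaussian kernel for a linear one recovers ordinary PCA on the same code path.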
Pattern Recognition, 41(11):3271-3286, November 2008 (article)
Many common machine learning methods such as Support Vector Machines or Gaussian process
inference make use of positive definite kernels, reproducing kernel Hilbert spaces, Gaussian processes, and
regularization operators. In this work these objects are presented in a general, unifying framework, and
interrelations are highlighted.
With this in mind we then show how linear stochastic differential equation models can be incorporated
naturally into the kernel framework. And vice versa, many kernel machines can be interpreted in terms of
differential equations. We focus especially on ordinary differential equations, also known as dynamical
systems, and it is shown that standard kernel inference algorithms are equivalent to Kalman filter methods
based on such models.
In order not to cloud qualitative insights with heavy mathematical machinery, we restrict ourselves to finite
domains, implying that differential equations are treated via their corresponding finite difference equations.
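The Kalman-filter side of this equivalence can be sketched for the simplest finite-difference model, a scalar random walk observed in noise; the noise variances and data below are illustrative:

```python
import numpy as np

# Scalar random-walk state-space model:
#   x_t = x_{t-1} + process noise (variance q)
#   y_t = x_t + observation noise (variance r)
def kalman_filter(ys, q=0.1, r=0.5, x0=0.0, p0=1.0):
    """Standard Kalman filter recursion; returns the filtered means."""
    x, p = x0, p0
    means = []
    for y in ys:
        p = p + q                 # predict step
        k = p / (p + r)           # Kalman gain
        x = x + k * (y - x)       # update step
        p = (1 - k) * p
        means.append(x)
    return np.array(means)

ys = np.array([1.0, 1.2, 0.9, 1.1, 1.0])
means = kalman_filter(ys)
```

On a finite grid of time points, the same posterior means can be obtained by kernel inference with the covariance function implied by this model, which is the equivalence the abstract refers to.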
(179), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2008 (techreport)
We investigate an implicit method to compute a piecewise linear representation of a surface from a
set of sample points. As implicit surface functions we use the weighted sum of piecewise linear kernel functions.
For such a function we can partition R^d in such a way that these functions are linear on the subsets of the partition.
For each subset in the partition we can then compute the zero level set of the function exactly as the intersection of
a hyperplane with the subset.
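The exactness of the zero level set on linear pieces can be sketched in one dimension; the hat kernel and all constants below are illustrative assumptions, not the report's actual kernels:

```python
import numpy as np

def hat_kernel(x, c):
    """Piecewise linear 'hat' kernel max(0, 1 - ||x - c||_1),
    one illustrative choice of piecewise linear kernel function."""
    return max(0.0, 1.0 - float(np.abs(np.asarray(x) - np.asarray(c)).sum()))

def implicit_f(x, centers, weights, level):
    """Implicit surface function: weighted sum of piecewise linear
    kernels minus a level; itself piecewise linear."""
    return sum(w * hat_kernel(x, c) for c, w in zip(centers, weights)) - level

def zero_crossing(f, a, b):
    """Zero of f on the segment [a, b]; exact (not approximate)
    whenever f is linear on that segment."""
    fa, fb = f(a), f(b)
    t = fa / (fa - fb)
    return a + t * (b - a)

centers, weights, level = [np.array([0.0])], [1.0], 0.5
f = lambda x: implicit_f(x, centers, weights, level)
# f is linear on [0, 0.8], so linear interpolation between samples of
# opposite sign lands exactly on the zero level set
x_star = zero_crossing(f, np.array([0.0]), np.array([0.8]))
```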
In Computer Vision - ECCV 2008, Lecture Notes in Computer Science, Vol. 5304, pages: 126-139, (Editors: DA Forsyth and PHS Torr and A Zisserman), Springer, Berlin, Germany, 10th European Conference on Computer Vision, October 2008 (inproceedings)
We aim to automatically colorize greyscale images, without any manual intervention. The proposed colors can then be corrected interactively with user-provided color landmarks if necessary. Automatic colorization is nontrivial since there is usually no one-to-one correspondence between color and local texture. The contribution of our framework is that we deal directly with multimodality and estimate, for each pixel of the image to be colored, the probability distribution of all possible colors,
instead of choosing the most probable color at the local level. We also predict the expected variation of color at each pixel, thus defining a nonuniform
spatial coherency criterion. We then use graph cuts to maximize the probability of the whole colored image at the global level. We work in the L-a-b color space in order to approximate the human perception of distances between colors, and we use machine learning tools to extract as much information as possible from a dataset of colored examples. The resulting algorithm is fast, designed to be more robust to texture noise, and is above all able to deal with ambiguity, contrary to previous approaches.
Journal of Nuclear Medicine, 49(11):1875-1883, October 2008 (article)
For quantitative PET information, correction of tissue photon attenuation is mandatory. Generally in conventional PET, the attenuation map is obtained from a transmission scan, which uses a rotating radionuclide source, or from the CT scan in a combined PET/CT scanner. In the case of PET/MRI scanners currently under development, insufficient space for the rotating source exists; the attenuation map can be calculated from the MR image instead. This task is challenging because MR intensities correlate with proton densities and tissue-relaxation properties, rather than with attenuation-related mass density. METHODS: We used a combination of local pattern recognition and atlas registration, which captures global variation of anatomy, to predict pseudo-CT images from a given MR image. These pseudo-CT images were then used for attenuation correction, as the process would be performed in a PET/CT scanner. RESULTS: For human brain scans, we show on a database of 17 MR/CT image pairs that our method reliably enables estimation of a pseudo-CT image from the MR image alone. On additional datasets of MRI/PET/CT triplets of human brain scans, we compare MRI-based attenuation correction with CT-based correction. Our approach enables PET quantification with a mean error of 3.2% for predefined regions of interest, which we found to be clinically not significant. However, our method is not specific to brain imaging, and we show promising initial results on 1 whole-body animal dataset. CONCLUSION: This method allows reliable MRI-based attenuation correction for human brain scans. Further work is necessary to validate the method for whole-body imaging.
In FG 2008, pages: 1-8, IEEE Computer Society, Los Alamitos, CA, USA, 8th IEEE International Conference on Automatic Face and Gesture Recognition, September 2008 (inproceedings)
This paper presents a fully automated algorithm for reconstructing
a textured 3D model of a face from a single
photograph or a raw video stream. The algorithm is based
on a combination of Support Vector Machines (SVMs) and
a Morphable Model of 3D faces. After SVM face detection,
individual facial features are detected using a novel
regression- and classification-based approach, and probabilistically
plausible configurations of features are selected
to produce a list of candidates for several facial feature positions.
In the next step, the configurations of feature points
are evaluated using a novel criterion that is based on a
Morphable Model and a combination of linear projections.
To make the algorithm robust with respect to head orientation,
this process is iterated while the estimate of pose is
refined. Finally, the feature points initialize a model-fitting
procedure of the Morphable Model. The result is a high-resolution
3D surface model.
In Advances in neural information processing systems 20, pages: 489-496, (Editors: JC Platt and D Koller and Y Singer and S Roweis), Curran, Red Hook, NY, USA, 21st Annual Conference on Neural Information Processing Systems (NIPS), September 2008 (inproceedings)
We propose a new measure of conditional dependence of random variables, based on normalized cross-covariance operators on reproducing kernel Hilbert spaces. Unlike previous kernel dependence measures, the proposed criterion does not depend on the choice of kernel in the limit of infinite data, for a wide class of kernels. At the same time, it has a straightforward empirical estimate with good convergence behaviour. We discuss the theoretical properties of the measure, and demonstrate its application in experiments.
In Advances in neural information processing systems 20, pages: 1369-1376, (Editors: JC Platt and D Koller and Y Singer and S Roweis), Curran, Red Hook, NY, USA, 21st Annual Conference on Neural Information Processing Systems (NIPS), September 2008 (inproceedings)
We study a pattern classification algorithm which has recently been proposed by Vapnik and coworkers. It builds on a new inductive principle which assumes that in addition to positive and negative data, a third class of data is available, termed the Universum. We assay the behavior of the algorithm by establishing links with Fisher discriminant analysis and oriented PCA, as well as with an SVM in a
projected subspace (or, equivalently, with a data-dependent reduced kernel). We also provide experimental results.
Journal of Mathematical Psychology, 52(5):297-303, September 2008 (article)
Similarity is used as an explanatory construct throughout psychology and multidimensional scaling (MDS) is the most popular way to assess similarity. In MDS, similarity is intimately connected to the idea of a geometric representation of stimuli in a perceptual space. Whilst connecting similarity and closeness of stimuli in a geometric representation may be intuitively plausible, Tversky and Gati [Tversky, A., Gati, I. (1982). Similarity, separability, and the triangle inequality. Psychological Review, 89(2), 123-154] have reported data which are inconsistent with the usual geometric representations that are based on segmental additivity. We show that similarity measures based on Shepard's universal law of generalization [Shepard, R. N. (1987). Toward a universal law of generalization for psychological science. Science, 237(4820), 1317-1323] lead to an inner product representation in a reproducing kernel Hilbert space. In such a space stimuli are represented by their similarity to all other stimuli. This representation, based on Shepard's law, has a natural metric that does not have additive segments whilst still retaining the intuitive notion of connecting similarity and distance between stimuli. Furthermore, this representation has the psychologically appealing property that the distance between stimuli is bounded.
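The bounded-distance property can be sketched directly, assuming one-dimensional stimuli and the exponential form of Shepard's law as an illustrative choice:

```python
import numpy as np

def shepard_kernel(x, y):
    """Similarity following Shepard's law of generalization: exp(-|x - y|)."""
    return np.exp(-np.abs(x - y))

def rkhs_distance(x, y):
    """Metric induced by the inner product representation:
    d(x, y) = sqrt(k(x,x) + k(y,y) - 2 k(x,y))."""
    return np.sqrt(shepard_kernel(x, x) + shepard_kernel(y, y)
                   - 2 * shepard_kernel(x, y))

# the distance grows with dissimilarity but is bounded by sqrt(2),
# since k(x,x) = 1 and k(x,y) >= 0
d_near, d_far = rkhs_distance(0.0, 0.1), rkhs_distance(0.0, 100.0)
```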
In Advances in neural information processing systems 20, pages: 585-592, (Editors: JC Platt and D Koller and Y Singer and S Roweis), Curran, Red Hook, NY, USA, 21st Annual Conference on Neural Information Processing Systems (NIPS), September 2008 (inproceedings)
Whereas kernel measures of independence have been widely applied in machine learning (notably in kernel ICA), there is as yet no method to determine whether they have detected statistically significant dependence. We provide a novel test of the independence hypothesis for one particular kernel independence measure, the Hilbert-Schmidt independence criterion (HSIC). The resulting test costs O(m^2), where m is the sample size. We demonstrate that this test outperforms established contingency table-based tests. Finally, we show the HSIC test also applies to text (and to structured data more generally), for which no other independence test presently exists.
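A minimal sketch of the statistic, using the biased empirical estimate HSIC = tr(KHLH)/m^2 with a permutation test standing in for the paper's null distribution; the kernel bandwidth, sample sizes, and permutation count are illustrative:

```python
import numpy as np

def gaussian_gram(x, sigma=1.0):
    """Gram matrix of the Gaussian kernel on a 1-D sample."""
    return np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))

def hsic(x, y):
    """Biased empirical HSIC: trace(K H L H) / m^2, H = I - (1/m) 11^T."""
    m = len(x)
    H = np.eye(m) - np.ones((m, m)) / m
    return np.trace(gaussian_gram(x) @ H @ gaussian_gram(y) @ H) / m ** 2

def hsic_perm_test(x, y, n_perm=200, seed=1):
    """p-value from permuting y, a stand-in for the asymptotic null."""
    rng = np.random.default_rng(seed)
    stat = hsic(x, y)
    null = np.array([hsic(x, rng.permutation(y)) for _ in range(n_perm)])
    return stat, float((null >= stat).mean())

rng = np.random.default_rng(0)
x = rng.normal(size=100)
stat, p = hsic_perm_test(x, x + 0.1 * rng.normal(size=100))  # strongly dependent pair
```

For strongly dependent samples the permuted statistics fall far below the observed one, giving a small p-value.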
Hinterberger, T., Widmann, G., Lal, T., Hill, J., Tangermann, M., Rosenstiel, W., Schölkopf, B., Elger, C., Birbaumer, N.
Epilepsy and Behavior, 13(2):300-306, August 2008 (article)
Brain-computer interfaces (BCIs) can be used for communication in writing without muscular activity or for learning to control seizures by voluntary regulation of brain signals such as the electroencephalogram (EEG). Three of five patients with epilepsy were able to spell their names with electrocorticogram (ECoG) signals derived from motor-related areas within only one or two training sessions. Imagery of finger or tongue movements was classified with support-vector classification of autoregressive coefficients derived from the ECoG signals. After training of the classifier, binary classification responses were used to select letters from a computer-generated menu. Offline analysis showed increased theta activity in the unsuccessful patients, whereas the successful patients exhibited dominant sensorimotor rhythms that they could control. The high spatial resolution and increased signal-to-noise ratio in ECoG signals, combined with short training periods, may offer an alternative for communication in complete paralysis, locked-in syndrome, and motor restoration.
Rätsch, G., Clark, R., Schweikert, G., Toomajian, C., Ossowski, S., Zeller, G., Shinn, P., Warthman, N., Hu, T., Fu, G., Hinds, D., Cheng, H., Frazer, K., Huson, D., Schölkopf, B., Nordborg, M., Ecker, J., Weigel, D., Schneeberger, K., Bohlen, A.
16th Annual International Conference Intelligent Systems for Molecular Biology (ISMB), July 2008 (talk)
Laubinger, S., Zeller, G., Henz, S., Sachsenberg, T., Widmer, C., Naouar, N., Vuylsteke, M., Schölkopf, B., Rätsch, G., Weigel, D.
Genome Biology, 9(7: R112):1-16, July 2008 (article)
Gene expression maps for model organisms, including Arabidopsis thaliana, have typically been created using gene-centric expression arrays. Here, we describe a comprehensive expression atlas, Arabidopsis thaliana Tiling Array Express (At-TAX), which is based on whole-genome tiling arrays. We demonstrate that tiling arrays are accurate tools for gene expression analysis and identified more than 1,000 unannotated transcribed regions. Visualizations of gene expression estimates, transcribed regions, and tiling probe measurements are accessible online at the At-TAX homepage.
In Proceedings of the 25th International Conference on Machine Learning, pages: 992-999, (Editors: WW Cohen and A McCallum and S Roweis), ACM Press, New York, NY, USA, ICML, July 2008 (inproceedings)
Moment matching is a popular means of parametric
density estimation. We extend this technique
to nonparametric estimation of mixture
models. Our approach works by embedding
distributions into a reproducing kernel Hilbert
space, and performing moment matching in that
space. This allows us to tailor density estimators
to a function class of interest (i.e., for which
we would like to compute expectations). We
show our density estimation approach is useful
in applications such as message compression in
graphical models, and image classification.
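The embedding-and-matching step can be sketched for fitting the weight of a two-component mixture; the component families, kernel, and sample sizes below are illustrative assumptions:

```python
import numpy as np

def gram(a, b, sigma=1.0):
    """Gaussian kernel Gram matrix between two 1-D samples."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))

def mixture_weight(target, comp1, comp2):
    """Fit the weight t of comp1 in a two-component mixture by matching
    kernel mean embeddings: minimize ||mu_target - t mu_1 - (1-t) mu_2||^2
    in the RKHS. All inner products reduce to means of Gram matrices."""
    A11, A22 = gram(comp1, comp1).mean(), gram(comp2, comp2).mean()
    A12 = gram(comp1, comp2).mean()
    b1, b2 = gram(target, comp1).mean(), gram(target, comp2).mean()
    # derivative of the quadratic objective in t set to zero
    return (A22 - A12 + b1 - b2) / (A11 - 2 * A12 + A22)

rng = np.random.default_rng(0)
n = 500
# target sample from the mixture 0.3 * N(-2, 1) + 0.7 * N(2, 1)
labels = rng.random(n) < 0.3
target = np.where(labels, rng.normal(-2, 1, n), rng.normal(2, 1, n))
t = mixture_weight(target, rng.normal(-2, 1, n), rng.normal(2, 1, n))
```

The recovered weight t approaches the true mixing proportion 0.3 as the samples grow.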
In Proceedings of the 21st Annual Conference on Learning Theory, pages: 111-122, (Editors: RA Servedio and T Zhang), Omnipress, Madison, WI, USA, 21st Annual Conference on Learning Theory (COLT), July 2008 (inproceedings)
A Hilbert space embedding for probability measures
has recently been proposed, with applications
including dimensionality reduction, homogeneity
testing and independence testing. This embedding
represents any probability measure as a mean element
in a reproducing kernel Hilbert space (RKHS).
The embedding function has been proven to be injective
when the reproducing kernel is universal.
In this case, the embedding induces a metric on the
space of probability distributions defined on compact metric spaces.
In the present work, we consider more broadly the
problem of specifying characteristic kernels, defined
as kernels for which the RKHS embedding
of probability measures is injective. In particular,
characteristic kernels can include non-universal kernels.
We restrict ourselves to translation-invariant
kernels on Euclidean space, and define the associated
metric on probability measures in terms of
the Fourier spectrum of the kernel and characteristic
functions of these measures. The support of the
kernel spectrum is important in finding whether a
kernel is characteristic: in particular, the embedding
is injective if and only if the kernel spectrum
has the entire domain as its support. Characteristic
kernels may nonetheless have difficulty in distinguishing
certain distributions on the basis of finite
samples, again due to the interaction of the kernel
spectrum and the characteristic functions of the distributions.
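The injectivity question can be sketched numerically. Below, the Gaussian kernel (full-support spectrum, hence characteristic) separates two distinct distributions with identical mean and variance, while the linear kernel, used here purely as an illustrative non-characteristic contrast since it matches only first moments, does not; samples and bandwidth are illustrative:

```python
import numpy as np

def mmd2(x, y, kernel):
    """Biased squared MMD: E k(x,x') - 2 E k(x,y) + E k(y,y')."""
    kxx = kernel(x[:, None], x[None, :]).mean()
    kyy = kernel(y[:, None], y[None, :]).mean()
    kxy = kernel(x[:, None], y[None, :]).mean()
    return kxx - 2 * kxy + kyy

gauss = lambda a, b: np.exp(-(a - b) ** 2 / 2)   # characteristic kernel
linear = lambda a, b: a * b                       # matches first moments only

rng = np.random.default_rng(0)
x = rng.normal(size=1000)                 # N(0, 1)
y = rng.choice([-1.0, 1.0], size=1000)    # +/-1 coin flips: same mean and variance
mmd_gauss = mmd2(x, y, gauss)
mmd_lin = mmd2(x, y, linear)
# the Gaussian MMD stays bounded away from zero; the linear one (nearly) vanishes
```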
In Proceedings of the 25th International Conference on Machine Learning, pages: 1112-1119, (Editors: WW Cohen and A McCallum and S Roweis), ACM Press, New York, NY, USA, ICML, July 2008 (inproceedings)
Most existing sparse Gaussian process (g.p.)
models seek computational advantages by
basing their computations on a set of m basis
functions that are the covariance function of
the g.p. with one of its two inputs fixed. We
generalise this for the case of Gaussian covariance
function, by basing our computations on
m Gaussian basis functions with arbitrary diagonal
covariance matrices (or length scales).
For a fixed number of basis functions and
any given criterion, this additional flexibility
permits approximations no worse and typically
better than was previously possible.
We perform gradient based optimisation of
the marginal likelihood, which costs O(m^2 n)
time where n is the number of data points,
and compare the method to various other
sparse g.p. methods. Although we focus on
g.p. regression, the central idea is applicable
to all kernel based algorithms, and we also
provide some results for the support vector
machine (s.v.m.) and kernel ridge regression
(k.r.r.). Our approach outperforms the other
methods, particularly for the case of very few
basis functions, i.e. a very high sparsity ratio.
Journal of Vision, 8(6):635, 8th Annual Meeting of the Vision Sciences Society (VSS), June 2008 (poster)
Humans perceive the world by directing the center of gaze from one location to another via rapid eye movements, called saccades. In the period between saccades the direction of gaze is held fixed for a few hundred milliseconds (fixations). It is primarily during fixations that information enters the visual system. Remarkably, however, after only a few fixations we perceive a coherent, high-resolution scene despite the visual acuity of the eye quickly decreasing away from the center of gaze: This suggests an effective strategy for selecting saccade targets.
Top-down effects, such as the observer's task, thoughts, or intentions have an effect on saccadic selection. Equally well known is that bottom-up effects (local image structure) influence saccade targeting regardless of top-down effects. However, the question of what the most salient visual features are is still under debate. Here we model the relationship between spatial intensity patterns in natural images and the response of the saccadic system using tools from machine learning. This allows us to identify the most salient image patterns that guide the bottom-up component of the saccadic selection system, which we refer to as perceptive fields. We show that center-surround patterns emerge as the optimal solution to the problem of predicting saccade targets. Using a novel nonlinear system identification technique we reduce our learned classifier to a one-layer feed-forward network which is surprisingly simple compared to previously suggested models assuming more complex computations such as multi-scale processing, oriented filters and lateral inhibition. Nevertheless, our model is equally predictive and generalizes better to novel image sets. Furthermore, our findings are consistent with neurophysiological hardware in the superior colliculus. Bottom-up visual saliency may thus not be computed cortically as has been thought previously.
Annals of Statistics, 36(3):1171-1220, June 2008 (article)
We review machine learning methods employing positive definite kernels. These methods formulate learning and estimation problems in a reproducing kernel Hilbert space (RKHS) of functions defined on the data domain, expanded in terms of a kernel. Working in linear spaces of functions has the benefit of facilitating the construction and analysis of learning algorithms while at the same time allowing large classes of functions. The latter include nonlinear functions as well as functions defined on nonvectorial data.
Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.