2017


Improving performance of linear field generation with multi-coil setup by optimizing coils position

Aghaeifar, A., Loktyushin, A., Eschelbach, M., Scheffler, K.

Magnetic Resonance Materials in Physics, Biology and Medicine, 30(Supplement 1):S259, 34th Annual Scientific Meeting of the European Society for Magnetic Resonance in Medicine and Biology (ESMRMB), October 2017 (poster)

link (url) DOI [BibTex]


Estimating B0 inhomogeneities with projection FID navigator readouts

Loktyushin, A., Ehses, P., Schölkopf, B., Scheffler, K.

25th Annual Meeting and Exhibition of the International Society for Magnetic Resonance in Medicine (ISMRM), April 2017 (poster)

link (url) [BibTex]


Image Quality Improvement by Applying Retrospective Motion Correction on Quantitative Susceptibility Mapping and R2*

Feng, X., Loktyushin, A., Deistung, A., Reichenbach, J.

25th Annual Meeting and Exhibition of the International Society for Magnetic Resonance in Medicine (ISMRM), April 2017 (poster)

link (url) [BibTex]


Generalized phase locking analysis of electrophysiology data

Safavi, S., Panagiotaropoulos, T., Kapoor, V., Logothetis, N. K., Besserve, M.

ESI Systems Neuroscience Conference (ESI-SyNC 2017): Principles of Structural and Functional Connectivity, 2017 (poster)

[BibTex]

2013


Studying large-scale brain networks: electrical stimulation and neural-event-triggered fMRI

Logothetis, N., Eschenko, O., Murayama, Y., Augath, M., Steudel, T., Evrard, H., Besserve, M., Oeltermann, A.

BMC Neuroscience, 14(Supplement 1):A1, Twenty-Second Annual Computational Neuroscience Meeting (CNS*2013), July 2013 (talk)

Web [BibTex]


Coupling between spiking activity and beta band spatio-temporal patterns in the macaque PFC

Safavi, S., Panagiotaropoulos, T., Kapoor, V., Logothetis, N., Besserve, M.

43rd Annual Meeting of the Society for Neuroscience (Neuroscience), 2013 (poster)

[BibTex]


Gaussian Process Vine Copulas for Multivariate Dependence

Lopez-Paz, D., Hernandez-Lobato, J., Ghahramani, Z.

International Conference on Machine Learning (ICML), 2013 (poster)

PDF [BibTex]


Domain Generalization via Invariant Feature Representation

Muandet, K., Balduzzi, D., Schölkopf, B.

30th International Conference on Machine Learning (ICML2013), 2013 (poster)

PDF [BibTex]


Analyzing locking of spikes to spatio-temporal patterns in the macaque prefrontal cortex

Safavi, S., Panagiotaropoulos, T., Kapoor, V., Logothetis, N., Besserve, M.

Bernstein Conference, 2013 (poster)

DOI [BibTex]


One-class Support Measure Machines for Group Anomaly Detection

Muandet, K., Schölkopf, B.

29th Conference on Uncertainty in Artificial Intelligence (UAI), 2013 (poster)

PDF [BibTex]


The Randomized Dependence Coefficient

Lopez-Paz, D., Hennig, P., Schölkopf, B.

Neural Information Processing Systems (NIPS), 2013 (poster)

PDF [BibTex]


Characterization of different types of sharp-wave ripple signatures in the CA1 of the macaque hippocampus

Ramirez-Villegas, J., Logothetis, N., Besserve, M.

4th German Neurophysiology PhD Meeting Networks, 2013 (poster)

Web [BibTex]


Domain Generalization via Invariant Feature Representation

Muandet, K.

30th International Conference on Machine Learning (ICML2013), 2013 (talk)

PDF [BibTex]

2008


BCPy2000

Hill, N., Schreiner, T., Puzicha, C., Farquhar, J.

Workshop "Machine Learning Open-Source Software" at NIPS, December 2008 (talk)

Web [BibTex]


Logistic Regression for Graph Classification

Shervashidze, N., Tsuda, K.

NIPS Workshop on "Structured Input - Structured Output" (NIPS SISO), December 2008 (talk)

Abstract
In this paper we deal with graph classification. We propose a new algorithm for performing sparse logistic regression for graphs, which is comparable in accuracy with other methods of graph classification and additionally produces probabilistic output. Sparsity is required for interpretability, which is often necessary in domains such as bioinformatics or chemoinformatics.
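
To make the setting concrete, here is a rough, hypothetical sketch of sparse (L1-regularized) logistic regression over binary subgraph-indicator features using scikit-learn. It is a generic baseline for illustration only, not the algorithm proposed in the talk; the variable names, sizes, and synthetic data are made up.

# Hypothetical illustration: L1-regularized logistic regression over binary
# subgraph-indicator features (each graph encoded as a 0/1 vector marking which
# candidate subgraphs it contains). Generic sparse baseline, not the talk's algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_graphs, n_subgraphs = 200, 500
X = (rng.random((n_graphs, n_subgraphs)) < 0.1).astype(float)   # subgraph occurrences
w_true = np.zeros(n_subgraphs)
w_true[:5] = 4.0                                                 # only 5 informative subgraphs
y = (X @ w_true + rng.normal(size=n_graphs) > 1.0).astype(int)

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
print("non-zero coefficients:", int(np.count_nonzero(clf.coef_)), "of", n_subgraphs)
print("class probabilities, first graph:", clf.predict_proba(X[:1]).round(2))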

Web [BibTex]


New Projected Quasi-Newton Methods with Applications

Sra, S.

Microsoft Research Tech-talk, December 2008 (talk)

Abstract
Box-constrained convex optimization problems are central to several applications in a variety of fields such as statistics, psychometrics, signal processing, medical imaging, and machine learning. Two fundamental examples are the non-negative least squares (NNLS) problem and the non-negative Kullback-Leibler (NNKL) divergence minimization problem. The non-negativity constraints are usually based on an underlying physical restriction: e.g., when dealing with applications in astronomy, tomography, statistical estimation, or image restoration, the underlying parameters represent physical quantities such as concentration, weight, intensity, or frequency counts, and are therefore only interpretable with non-negative values. Several modern optimization methods can be inefficient for simple problems such as NNLS and NNKL as they are really designed to handle far more general and complex problems. In this work we develop two simple quasi-Newton methods for solving box-constrained (differentiable) convex optimization problems that utilize the well-known BFGS and limited-memory BFGS updates. We position our method between projected gradient (Rosen, 1960) and projected Newton (Bertsekas, 1982) methods, and prove its convergence under a simple Armijo step-size rule. We illustrate our method with applications to image deblurring, Positron Emission Tomography (PET) image reconstruction, and Non-negative Matrix Approximation (NMA). On medium-sized data we observe performance competitive with established procedures, while for larger data the results are even better.
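
As a point of reference for the box-constrained setting described above, here is a minimal sketch of plain projected gradient descent for NNLS, the simplest end of the spectrum the talk positions itself against. It is not the proposed projected quasi-Newton method, and the problem sizes are arbitrary.

# Minimal sketch: projected gradient descent for non-negative least squares (NNLS),
# i.e. min_x 0.5*||Ax - b||^2 subject to x >= 0. Only the "projected gradient" end
# of the spectrum mentioned above, not the proposed projected quasi-Newton method.
import numpy as np

def nnls_projected_gradient(A, b, n_iter=500):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L with L = spectral norm squared
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                      # gradient of the least-squares term
        x = np.maximum(x - step * grad, 0.0)          # project onto the non-negative orthant
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.maximum(rng.normal(size=20), 0.0)
x_hat = nnls_projected_gradient(A, A @ x_true)
print("residual norm:", np.linalg.norm(A @ x_hat - A @ x_true))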

PDF [BibTex]


Variational Bayesian Model Selection in Linear Gaussian State-Space based Models

Chiappa, S.

International Workshop on Flexible Modelling: Smoothing and Robustness (FMSR 2008), 2008, pages: 1, November 2008 (poster)

Web [BibTex]


MR-Based PET Attenuation Correction: Initial Results for Whole Body

Hofmann, M., Steinke, F., Aschoff, P., Lichy, M., Brady, M., Schölkopf, B., Pichler, B.

Medical Imaging Conference, October 2008 (talk)

[BibTex]


Nonparametric Independence Tests: Space Partitioning and Kernel Approaches

Gretton, A., Györfi, L.

19th International Conference on Algorithmic Learning Theory (ALT08), October 2008 (talk)

PDF Web [BibTex]


Data-driven goodness-of-fit tests

Langovoy, M.

2008 Barcelona Conference on Asymptotic Statistics (BAS), September 2008 (talk)

Web [BibTex]


Towards the neural basis of the flash-lag effect

Ecker, A., Berens, P., Hoenselaar, A., Subramaniyan, M., Tolias, A., Bethge, M.

International Workshop on Aspects of Adaptive Cortex Dynamics, 2008, pages: 1, September 2008 (poster)

PDF [BibTex]


mGene: A Novel Discriminative Gene Finder

Schweikert, G., Zeller, G., Zien, A., Behr, J., Sonnenburg, S., Philips, P., Ong, C., Rätsch, G.

Worm Genomics and Systems Biology meeting, July 2008 (talk)

[BibTex]


Policy Learning: A Unified Perspective With Applications In Robotics

Peters, J., Kober, J., Nguyen-Tuong, D.

8th European Workshop on Reinforcement Learning for Robotics (EWRL 2008), 8, pages: 10, July 2008 (poster)

Abstract
Policy Learning approaches are among the best-suited methods for high-dimensional, continuous control systems such as anthropomorphic robot arms and humanoid robots. In this paper, we show two contributions: firstly, we show a unified perspective which allows us to derive several policy learning algorithms from a common point of view, i.e., policy gradient algorithms, natural-gradient algorithms and EM-like policy learning. Secondly, we present several applications to both robot motor primitive learning as well as to robot control in task space. Results both from simulation and several different real robots are shown.
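
For illustration only, a toy REINFORCE-style ("vanilla") policy gradient update for a one-dimensional Gaussian policy on a bandit-like reward. It is meant to make the policy-gradient branch of the unified view concrete; it is not one of the algorithms derived in the paper, and all constants are arbitrary.

# Toy REINFORCE-style policy gradient for a 1-D Gaussian policy on a bandit-like
# task: maximize E[r(a)] with r(a) = -(a - 2)^2. Purely illustrative of the
# policy-gradient family; constants and setup are made up.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, lr = 0.0, 1.0, 0.02
baseline = 0.0                                  # running reward average, reduces variance

for _ in range(5000):
    a = rng.normal(mu, sigma)                   # sample an action from the policy
    r = -(a - 2.0) ** 2                         # reward; the optimum is at a = 2
    baseline += 0.01 * (r - baseline)
    grad_log_pi = (a - mu) / sigma ** 2         # d/d(mu) of log N(a | mu, sigma^2)
    mu += lr * (r - baseline) * grad_log_pi     # REINFORCE update of the policy mean

print("learned policy mean (should drift towards 2):", round(mu, 2))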

PDF [BibTex]


Discovering Common Sequence Variation in Arabidopsis thaliana

Rätsch, G., Clark, R., Schweikert, G., Toomajian, C., Ossowski, S., Zeller, G., Shinn, P., Warthman, N., Hu, T., Fu, G., Hinds, D., Cheng, H., Frazer, K., Huson, D., Schölkopf, B., Nordborg, M., Ecker, J., Weigel, D., Schneeberger, K., Bohlen, A.

16th Annual International Conference Intelligent Systems for Molecular Biology (ISMB), July 2008 (talk)

Web [BibTex]


Coding Theory in Brain-Computer Interfaces

Martens, SMM.

Soria Summerschool on Computational Mathematics "Algebraic Coding Theory" (S3CM), July 2008 (talk)

Web [BibTex]


Motor Skill Learning for Cognitive Robotics

Peters, J.

6th International Cognitive Robotics Workshop (CogRob), July 2008 (talk)

Abstract
Autonomous robots that can assist humans in situations of daily life have been a long-standing vision of robotics, artificial intelligence, and cognitive sciences. A first step towards this goal is to create robots that can learn tasks triggered by environmental context or higher-level instruction. However, learning techniques have yet to live up to this promise as only a few methods manage to scale to high-dimensional manipulator or humanoid robots. In this tutorial, we give a general overview of motor skill learning for cognitive robotics, using research at ATR, USC, CMU and Max-Planck to illustrate the problems in motor skill learning. To do so, we discuss task-appropriate representations and algorithms for learning robot motor skills. Among the topics are learning basic movements or motor primitives by imitation and reinforcement learning, learning rhythmic and discrete movements, fast regression methods for learning inverse dynamics, and setups for learning task-space policies. Examples on various robots, e.g., SARCOS DB, the SARCOS Master Arm, BDI Little Dog and a Barrett WAM, are shown and include Ball-in-a-Cup, T-Ball, Juggling, Devil-Sticking, Operational Space Control and many others.

Web [BibTex]


Reinforcement Learning of Perceptual Coupling for Motor Primitives

Kober, J., Peters, J.

8th European Workshop on Reinforcement Learning for Robotics (EWRL 2008), 8, pages: 16, July 2008 (poster)

Abstract
Reinforcement learning is a natural choice for the learning of complex motor tasks by reward-related self-improvement. As the space of movements is high-dimensional and continuous, a policy parametrization is needed which can be used in this context. Traditional motor primitive approaches deal largely with open-loop policies which can only cope with small perturbations. In this paper, we present a new type of motor primitive policies which serve as closed-loop policies, together with an appropriate learning algorithm. Our new motor primitives are an augmented version of the dynamic systems motor primitives that incorporates perceptual coupling to external variables. We show that these motor primitives can perform complex tasks such as a Ball-in-a-Cup or Kendama task even with large variances in the initial conditions, where a human would hardly be able to learn this task. We initialize the open-loop policies by imitation learning and the perceptual coupling with a handcrafted solution. We first improve the open-loop policies and subsequently the perceptual coupling using a novel reinforcement learning method which is particularly well-suited for motor primitives.
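
For orientation, a minimal sketch of a generic discrete dynamic-systems motor primitive (one degree of freedom, Ijspeert-style), integrated with Euler steps. The learned forcing term is set to zero, the perceptual-coupling terms that the paper adds are omitted, and all constants are illustrative.

# Generic discrete dynamic-systems motor primitive, one degree of freedom,
# integrated with Euler steps. The learned forcing term is zero here, and the
# perceptual-coupling terms added in the paper are not modelled.
import numpy as np

def rollout_dmp(x0=0.0, g=1.0, tau=1.0, K=100.0, D=20.0, alpha_s=4.0,
                forcing=lambda s: 0.0, dt=0.001, T=1.5):
    x, v, s = x0, 0.0, 1.0
    traj = []
    for _ in range(int(T / dt)):
        f = forcing(s)                                    # learned shape term (zero here)
        v_dot = (K * (g - x) - D * v + (g - x0) * f) / tau
        x_dot = v / tau
        s_dot = -alpha_s * s / tau                        # canonical system (phase variable)
        v, x, s = v + dt * v_dot, x + dt * x_dot, s + dt * s_dot
        traj.append(x)
    return np.array(traj)

print("final position, should end near the goal g = 1:", round(rollout_dmp()[-1], 3))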

PDF [BibTex]


Painless Embeddings of Distributions: the Function Space View (Part 1)

Fukumizu, K., Gretton, A., Smola, A.

25th International Conference on Machine Learning (ICML), July 2008 (talk)

Abstract
This tutorial will give an introduction to the recent understanding and methodology of the kernel method: dealing with higher-order statistics by painlessly embedding random variables/probability distributions. In the early days of kernel machines research, the "kernel trick" was considered a useful way of constructing nonlinear algorithms from linear ones. More recently, however, it has become clear that a potentially more far-reaching use of kernels is as a linear way of dealing with higher-order statistics by embedding distributions in a suitable reproducing kernel Hilbert space (RKHS). Notably, unlike the straightforward expansion of higher-order moments or the conventional characteristic function approach, the use of kernels or RKHS provides a painless, tractable way of embedding distributions. This line of reasoning leads naturally to the questions: what does it mean to embed a distribution in an RKHS? When is this embedding injective (and thus, when do different distributions have unique mappings)? What implications are there for learning algorithms that make use of these embeddings? This tutorial aims at answering these questions. There is a great variety of applications in machine learning and computer science that require distribution estimation and/or comparison.
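
Written out explicitly, the central object of the tutorial is the kernel mean embedding. In standard notation (ours, not quoted from the tutorial), the embedding of a distribution P under a kernel k and the induced distance between two distributions read

\mu_P := \mathbb{E}_{X \sim P}[k(\cdot, X)] \in \mathcal{H}_k,
\qquad
\|\mu_P - \mu_Q\|_{\mathcal{H}_k}^2 = \mathbb{E}[k(X, X')] - 2\,\mathbb{E}[k(X, Y)] + \mathbb{E}[k(Y, Y')],

with X, X' \sim P and Y, Y' \sim Q drawn independently. The embedding is injective, i.e. this distance vanishes only for P = Q, exactly when k is a characteristic kernel.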

PDF Web [BibTex]


Reinforcement Learning for Robotics

Peters, J.

8th European Workshop on Reinforcement Learning for Robotics (EWRL), July 2008 (talk)

Web [BibTex]


Flexible Models for Population Spike Trains

Bethge, M., Macke, J., Berens, P., Ecker, A., Tolias, A.

AREADNE 2008: Research in Encoding and Decoding of Neural Ensembles, 2, pages: 52, June 2008 (poster)

PDF [BibTex]


Pairwise Correlations and Multineuronal Firing Patterns in the Primary Visual Cortex of the Awake, Behaving Macaque

Berens, P., Ecker, A., Subramaniyan, M., Macke, J., Hauck, P., Bethge, M., Tolias, A.

AREADNE 2008: Research in Encoding and Decoding of Neural Ensembles, 2, pages: 48, June 2008 (poster)

PDF [BibTex]


Visual saliency re-visited: Center-surround patterns emerge as optimal predictors for human fixation targets

Wichmann, F., Kienzle, W., Schölkopf, B., Franz, M.

Journal of Vision, 8(6):635, 8th Annual Meeting of the Vision Sciences Society (VSS), June 2008 (poster)

Abstract
Humans perceive the world by directing the center of gaze from one location to another via rapid eye movements, called saccades. In the period between saccades the direction of gaze is held fixed for a few hundred milliseconds (fixations). It is primarily during fixations that information enters the visual system. Remarkably, however, after only a few fixations we perceive a coherent, high-resolution scene despite the visual acuity of the eye quickly decreasing away from the center of gaze: this suggests an effective strategy for selecting saccade targets. Top-down effects, such as the observer's task, thoughts, or intentions, have an effect on saccadic selection. Equally well known is that bottom-up effects (local image structure) influence saccade targeting regardless of top-down effects. However, the question of what the most salient visual features are is still under debate. Here we model the relationship between spatial intensity patterns in natural images and the response of the saccadic system using tools from machine learning. This allows us to identify the most salient image patterns that guide the bottom-up component of the saccadic selection system, which we refer to as perceptive fields. We show that center-surround patterns emerge as the optimal solution to the problem of predicting saccade targets. Using a novel nonlinear system identification technique we reduce our learned classifier to a one-layer feed-forward network which is surprisingly simple compared to previously suggested models assuming more complex computations such as multi-scale processing, oriented filters and lateral inhibition. Nevertheless, our model is equally predictive and generalizes better to novel image sets. Furthermore, our findings are consistent with neurophysiological hardware in the superior colliculus. Bottom-up visual saliency may thus not be computed cortically as has been thought previously.

Web DOI [BibTex]


Multi-Classification by Categorical Features via Clustering

Seldin, Y.

25th International Conference on Machine Learning (ICML), June 2008 (talk)

Abstract
We derive a generalization bound for multi-classification schemes based on grid clustering in categorical parameter product spaces. Grid clustering partitions the parameter space in the form of a Cartesian product of partitions for each of the parameters. The derived bound provides a means to evaluate clustering solutions in terms of the generalization power of a built-on classifier. For classification based on a single feature the bound serves to find a globally optimal classification rule. Comparison of the generalization power of individual features can then be used for feature ranking. Our experiments show that in this role the bound is much more precise than mutual information or normalized correlation indices.

PDF Web [BibTex]


Analysis of Pattern Recognition Methods in Classifying Bold Signals in Monkeys at 7-Tesla

Ku, S., Gretton, A., Macke, J., Tolias, A., Logothetis, N.

AREADNE 2008: Research in Encoding and Decoding of Neural Ensembles, 2, pages: 67, June 2008 (poster)

Abstract
Pattern recognition methods have shown that fMRI data can reveal significant information about brain activity. For example, in the debate of how object-categories are represented in the brain, multivariate analysis has been used to provide evidence of distributed encoding schemes. Many follow-up studies have employed different methods to analyze human fMRI data with varying degrees of success. In this study we compare four popular pattern recognition methods: correlation analysis, support-vector machines (SVM), linear discriminant analysis and Gaussian naïve Bayes (GNB), using data collected at high field (7T) with higher resolution than usual fMRI studies. We investigate prediction performance on single trials and for averages across varying numbers of stimulus presentations. The performance of the various algorithms depends on the nature of the brain activity being categorized: for several tasks, many of the methods work well, whereas for others, no methods perform above chance level. An important factor in overall classification performance is careful preprocessing of the data, including dimensionality reduction, voxel selection, and outlier elimination.
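
As a schematic stand-in for this kind of comparison (not the authors' pipeline, preprocessing, or data), one could cross-validate off-the-shelf versions of the four classifier families on synthetic "voxel" data, e.g. with scikit-learn:

# Schematic stand-in for the comparison described above: cross-validated
# accuracies of four off-the-shelf classifiers on synthetic "voxel" data.
# Not the authors' pipeline, preprocessing, or data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import NearestCentroid
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 200
y = rng.integers(0, 2, size=n_trials)                          # two stimulus classes
X = rng.normal(size=(n_trials, n_voxels)) + 0.3 * y[:, None]   # weak class-dependent signal

models = {
    "nearest centroid (template matching)": NearestCentroid(),
    "linear SVM": LinearSVC(),
    "LDA": LinearDiscriminantAnalysis(),
    "Gaussian naive Bayes": GaussianNB(),
}
for name, model in models.items():
    print(name, "mean accuracy:", cross_val_score(model, X, y, cv=5).mean().round(2))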

[BibTex]


Thin-Plate Splines Between Riemannian Manifolds

Steinke, F., Hein, M., Schölkopf, B.

Workshop on Geometry and Statistics of Shapes, June 2008 (talk)

Abstract
With the help of differential geometry we describe a framework to define a thin-plate-spline-like energy for maps between arbitrary Riemannian manifolds. The so-called Eells energy only depends on the intrinsic geometry of the input and output manifolds, but not on their respective representation. The energy can then be used for regression between manifolds; we present results for cases where the outputs are rotations, sets of angles, or points on 3D surfaces. In the future we plan to also target regression where the output is an element of "shape space", understood as a Riemannian manifold. One could also further explore the meaning of the Eells energy when applied to diffeomorphisms between shapes, especially with regard to its potential use as a distance measure between shapes that does not depend on the embedding or the parametrisation of the shapes.

Web [BibTex]


Learning resolved velocity control

Peters, J.

2008 IEEE International Conference on Robotics and Automation (ICRA), May 2008 (talk)

Web [BibTex]


Bayesian methods for protein structure determination

Habeck, M.

Machine Learning in Structural Bioinformatics, April 2008 (talk)

Web [BibTex]


The role of stimulus correlations for population decoding in the retina

Schwartz, G., Macke, J., Berry, M.

Computational and Systems Neuroscience 2008 (COSYNE 2008), 5, pages: 172, March 2008 (poster)

PDF Web [BibTex]

2006


Some observations on the pedestal effect or dipper function

Henning, B., Wichmann, F.

Journal of Vision, 6(13):50, 2006 Fall Vision Meeting of the Optical Society of America, December 2006 (poster)

Abstract
The pedestal effect is the large improvement in the detectability of a sinusoidal “signal” grating observed when the signal is added to a masking or “pedestal” grating of the same spatial frequency, orientation, and phase. We measured the pedestal effect in both broadband and notched noise, i.e. noise from which a 1.5-octave band centred on the signal frequency had been removed. Although the pedestal effect persists in broadband noise, it almost disappears in the notched noise. Furthermore, the pedestal effect is substantial when either high- or low-pass masking noise is used. We conclude that the pedestal effect in the absence of notched noise results principally from the use of information derived from channels with peak sensitivities at spatial frequencies different from that of the signal and pedestal. The spatial-frequency components of the notched noise above and below the spatial frequency of the signal and pedestal prevent the use of information about changes in contrast carried in channels tuned to spatial frequencies that are very much different from that of the signal and pedestal. Thus the pedestal or dipper effect measured without notched noise is not a characteristic of individual spatial-frequency-tuned channels.

Web DOI [BibTex]


A Kernel Method for the Two-Sample-Problem

Gretton, A., Borgwardt, K., Rasch, M., Schölkopf, B., Smola, A.

20th Annual Conference on Neural Information Processing Systems (NIPS), December 2006 (talk)

Abstract
We propose two statistical tests to determine if two samples are from different distributions. Our test statistic is in both cases the distance between the means of the two samples mapped into a reproducing kernel Hilbert space (RKHS). The first test is based on a large deviation bound for the test statistic, while the second is based on the asymptotic distribution of this statistic. We show that the test statistic can be computed in $O(m^2)$ time. We apply our approach to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where our test performs strongly. We also demonstrate excellent performance when comparing distributions over graphs, for which no alternative tests currently exist.
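
For concreteness, a small numerical sketch of the quantity described above: the biased empirical estimate of the squared MMD with a Gaussian RBF kernel. The actual tests additionally require the large-deviation or asymptotic thresholds derived in the paper, which are not shown; the bandwidth and sample sizes here are arbitrary.

# Biased empirical estimate of the squared MMD between two samples, Gaussian RBF
# kernel; illustrates the O(m^2) statistic only, not the test thresholds.
import numpy as np

def mmd2_biased(X, Y, sigma=1.0):
    def k(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))
Y_same = rng.normal(0.0, 1.0, size=(200, 2))
Y_diff = rng.normal(0.5, 1.0, size=(200, 2))            # mean-shifted distribution
print("MMD^2, same distribution:     ", mmd2_biased(X, Y_same))
print("MMD^2, different distribution:", mmd2_biased(X, Y_diff))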

PDF [BibTex]


Ab-initio gene finding using machine learning

Schweikert, G., Zeller, G., Zien, A., Ong, C., de Bona, F., Sonnenburg, S., Phillips, P., Rätsch, G.

NIPS Workshop on New Problems and Methods in Computational Biology, December 2006 (talk)

Web [BibTex]


Reinforcement Learning by Reward-Weighted Regression

Peters, J.

NIPS Workshop: Towards a New Reinforcement Learning?, December 2006 (talk)

Web [BibTex]


Graph boosting for molecular QSAR analysis

Saigo, H., Kadowaki, T., Kudo, T., Tsuda, K.

NIPS Workshop on New Problems and Methods in Computational Biology, December 2006 (talk)

Abstract
We propose a new boosting method that systematically combines graph mining and mathematical programming-based machine learning. Informative and interpretable subgraph features are greedily found by a series of graph mining calls. Due to our mathematical programming formulation, subgraph features and pre-calculated real-valued features are seamlessly integrated. We tested our algorithm on a quantitative structure-activity relationship (QSAR) problem, which is essentially a regression problem over a given set of chemical compounds. In benchmark experiments, the prediction accuracy of our method compared favorably with the best results reported on each dataset.

Web [BibTex]


Inferring Causal Directions by Evaluating the Complexity of Conditional Distributions

Sun, X., Janzing, D., Schölkopf, B.

NIPS Workshop on Causality and Feature Selection, December 2006 (talk)

Abstract
We propose a new approach to infer the causal structure that has generated the observed statistical dependences among n random variables. The idea is that the factorization of the joint measure of cause and effect into P(cause)P(effect|cause) typically leads to simpler conditionals than non-causal factorizations. To evaluate the complexity of the conditionals we have tried two methods. First, we have compared them to those which maximize the conditional entropy subject to the observed first and second moments, since we consider the latter as the simplest conditionals. Second, we have fitted the data with conditional probability measures that are exponentials of functions in an RKHS and defined the complexity by a Hilbert-space semi-norm. Such a complexity measure has several properties that are useful for our purpose. We describe some encouraging results with both methods applied to real-world data. Moreover, we have combined constraint-based approaches to causal discovery (i.e., methods using only information on conditional statistical dependences) with our method in order to distinguish between causal hypotheses which are equivalent with respect to the imposed independences. Furthermore, we compare the performance to Bayesian approaches to causal inference.

Web [BibTex]


Learning Optimal EEG Features Across Time, Frequency and Space

Farquhar, J., Hill, J., Schölkopf, B.

NIPS Workshop on Current Trends in Brain-Computer Interfacing, December 2006 (talk)

PDF Web [BibTex]


Semi-Supervised Learning

Zien, A.

Advanced Methods in Sequence Analysis Lectures, November 2006 (talk)

Web [BibTex]


Optimizing Spatial Filters for BCI: Margin- and Evidence-Maximization Approaches

Farquhar, J., Hill, N., Schölkopf, B.

Challenging Brain-Computer Interfaces: MAIA Workshop 2006, pages: 1, November 2006 (poster)

Abstract
We present easy-to-use alternatives to the often-used two-stage Common Spatial Pattern + classifier approach for spatial filtering and classification of Event-Related Desynchronization signals in BCI. We report two algorithms that aim to optimize the spatial filters according to a criterion more directly related to the ability of the algorithms to generalize to unseen data. Both are based upon the idea of treating the spatial filter coefficients as hyperparameters of a kernel or covariance function. We then optimize these hyperparameters directly alongside the normal classifier parameters with respect to our chosen learning objective function. The two objectives considered are margin maximization as used in Support-Vector Machines and the evidence maximization framework used in Gaussian Processes. Our experiments assessed generalization error as a function of the number of training points used, on 9 BCI competition data sets and 5 offline motor imagery data sets measured in Tübingen. Both our approaches show consistent improvements relative to the commonly used CSP+linear classifier combination. Strikingly, the improvement is most significant in the higher-noise cases, when either few trials are used for training, or with the most poorly performing subjects. This is a reversal of the usual "rich get richer" effect in the development of CSP extensions, which tend to perform best when the signal is strong enough to accurately find their additional parameters. This makes our approach particularly suitable for clinical applications where high levels of noise are to be expected.
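
For context, here is a minimal sketch of the standard CSP baseline that the proposed methods are compared against: spatial filters obtained from a generalized eigendecomposition of the two class covariance matrices. This is not the margin- or evidence-maximization approach of the abstract, and the data are synthetic.

# Minimal sketch of the standard CSP baseline referred to above: spatial filters
# are generalized eigenvectors of the two class covariance matrices.
# Synthetic data; not the proposed margin/evidence-maximization approaches.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b):
    # trials_*: arrays of shape (n_trials, n_channels, n_samples), band-passed EEG
    cov = lambda trials: np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
    Ca, Cb = cov(trials_a), cov(trials_b)
    evals, evecs = eigh(Ca, Ca + Cb)                 # generalized eigenproblem
    return evecs[:, np.argsort(evals)[::-1]]         # columns = filters, sorted by class-A variance

rng = np.random.default_rng(0)
W = csp_filters(rng.normal(size=(30, 8, 256)), rng.normal(size=(30, 8, 256)))
print("spatial filter matrix shape:", W.shape)       # (n_channels, n_channels)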

PDF PDF [BibTex]


A Machine Learning Approach for Determining the PET Attenuation Map from Magnetic Resonance Images

Hofmann, M., Steinke, F., Judenhofer, M., Claussen, C., Schölkopf, B., Pichler, B.

IEEE Medical Imaging Conference, November 2006 (talk)

Abstract
A promising new combination in multimodality imaging is MR-PET, where the high soft tissue contrast of Magnetic Resonance Imaging (MRI) and the functional information of Positron Emission Tomography (PET) are combined. Although many technical problems have recently been solved, it is still an open problem to determine the attenuation map from the available MR scan, as the MR intensities are not directly related to the attenuation values. One standard approach is an atlas registration where the atlas MR image is aligned with the patient MR thus also yielding an attenuation image for the patient. We also propose another approach, which to our knowledge has not been tried before: Using Support Vector Machines we predict the attenuation value directly from the local image information. We train this well-established machine learning algorithm using small image patches. Although both approaches sometimes yielded acceptable results, they also showed their specific shortcomings: The registration often fails with large deformations whereas the prediction approach is problematic when the local image structure is not characteristic enough. However, the failures often do not coincide and integration of both information sources is promising. We therefore developed a combination method extending Support Vector Machines to use not only local image structure but also atlas registered coordinates. We demonstrate the strength of this combination approach on a number of examples.
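
As a toy sketch of the patch-based prediction idea described above, one can regress an attenuation value from the intensities of a small local MR patch with support vector regression. The data below are synthetic stand-ins, and the proposed combination with atlas-registered coordinates is not shown.

# Toy sketch of the patch-based idea: predict an attenuation value from the
# intensities of a small local MR patch with support vector regression.
# Synthetic stand-in data; the combination with atlas-registered coordinates
# described above is not shown.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_patches, patch_size = 500, 3 * 3 * 3                      # flattened 3x3x3 patches
X = rng.normal(size=(n_patches, patch_size))                # fake local MR intensities
y = X.sum(axis=1) + 0.1 * rng.normal(size=n_patches)        # fake attenuation target

model = SVR(kernel="rbf").fit(X[:400], y[:400])
print("held-out R^2:", round(model.score(X[400:], y[400:]), 2))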

[BibTex]