

2013


Identification of stimulus cues in narrow-band tone-in-noise detection using sparse observer models

Schönfelder, V., Wichmann, F.

Journal of the Acoustical Society of America, 134(1):447-463, 2013 (article)

DOI [BibTex]



Probabilistic Model-based Imitation Learning

Englert, P., Paraschos, A., Peters, J., Deisenroth, M.

Adaptive Behavior Journal, 21(5):388-403, 2013 (article)

PDF DOI [BibTex]



Metabolic cost as an organizing principle for cooperative learning

Balduzzi, D., Ortega, P., Besserve, M.

Advances in Complex Systems, 16(02n03):1350012, 2013 (article)

Web DOI [BibTex]



MR-based PET Attenuation Correction for PET/MR Imaging

Bezrukov, I., Mantlik, F., Schmidt, H., Schölkopf, B., Pichler, B.

Seminars in Nuclear Medicine, 43(1):45-59, 2013 (article)

DOI [BibTex]



MR-based Attenuation Correction Methods for Improved PET Quantification in Lesions within Bone and Susceptibility Artifact Regions

Bezrukov, I., Schmidt, H., Mantlik, F., Schwenzer, N., Brendle, C., Schölkopf, B., Pichler, B.

Journal of Nuclear Medicine, 54(10):1768-1774, 2013 (article)

Abstract
Hybrid PET/MR systems have recently entered clinical practice. Thus, the accuracy of MR-based attenuation correction in simultaneously acquired data can now be investigated. We assessed the accuracy of 4 methods of MR-based attenuation correction in lesions within soft tissue, bone, and MR susceptibility artifacts: 2 segmentation-based methods (SEG1, provided by the manufacturer, and SEG2, a method with atlas-based susceptibility artifact correction); an atlas- and pattern recognition–based method (AT&PR), which also used artifact correction; and a new method combining AT&PR and SEG2 (SEG2wBONE). Methods: Attenuation maps were calculated for the PET/MR datasets of 10 patients acquired on a whole-body PET/MR system, allowing for simultaneous acquisition of PET and MR data. Eighty percent iso-contour volumes of interest were placed on lesions in soft tissue (n = 21), in bone (n = 20), near bone (n = 19), and within or near MR susceptibility artifacts (n = 9). Relative mean volume-of-interest differences were calculated with CT-based attenuation correction as a reference. Results: For soft-tissue lesions, none of the methods revealed a significant difference in PET standardized uptake value relative to CT-based attenuation correction (SEG1, −2.6% ± 5.8%; SEG2, −1.6% ± 4.9%; AT&PR, −4.7% ± 6.5%; SEG2wBONE, 0.2% ± 5.3%). For bone lesions, underestimation of PET standardized uptake values was found for all methods, with minimized error for the atlas-based approaches (SEG1, −16.1% ± 9.7%; SEG2, −11.0% ± 6.7%; AT&PR, −6.6% ± 5.0%; SEG2wBONE, −4.7% ± 4.4%). For lesions near bone, underestimations of lower magnitude were observed (SEG1, −12.0% ± 7.4%; SEG2, −9.2% ± 6.5%; AT&PR, −4.6% ± 7.8%; SEG2wBONE, −4.2% ± 6.2%). For lesions affected by MR susceptibility artifacts, quantification errors could be reduced using the atlas-based artifact correction (SEG1, −54.0% ± 38.4%; SEG2, −15.0% ± 12.2%; AT&PR, −4.1% ± 11.2%; SEG2wBONE, 0.6% ± 11.1%). 
Conclusion: For soft-tissue lesions, none of the evaluated methods showed statistically significant errors. For bone lesions, significant underestimations of −16% and −11% occurred for methods in which bone tissue was ignored (SEG1 and SEG2). In the present attenuation correction schemes, uncorrected MR susceptibility artifacts typically result in reduced attenuation values, potentially leading to highly reduced PET standardized uptake values, rendering lesions indistinguishable from background. While AT&PR and SEG2wBONE show accurate results in both soft tissue and bone, SEG2wBONE uses a two-step approach for tissue classification, which increases the robustness of prediction and can be applied retrospectively if more precision in bone areas is needed.

Web DOI [BibTex]



Learning output kernels for multi-task problems

Dinuzzo, F.

Neurocomputing, 118, pages: 119-126, 2013 (article)

DOI [BibTex]



Analytical probabilistic modeling for radiation therapy treatment planning

Bangert, M., Hennig, P., Oelfke, U.

Physics in Medicine and Biology, 58(16):5401-5419, 2013 (article)

PDF DOI [BibTex]



Imaging Findings and Therapy Response Monitoring in Chronic Sclerodermatous Graft-Versus-Host Disease: Preliminary Data of a Simultaneous PET/MRI Approach

Sauter, A., Schmidt, H., Mantlik, F., Kolb, A., Federmann, B., Pfannenberg, C., Reimold, M., Pichler, B., Bethge, W., Horger, M.

Clinical Nuclear Medicine, 38(8):e309-e317, 2013 (article)

Abstract
PURPOSE: Our objective was a multifunctional imaging approach to chronic sclerodermatous graft-versus-host disease (ScGVHD) and its course during therapy using PET/MRI. METHODS: We performed partial-body PET/CT and PET/MRI of the calf in 6 consecutively recruited patients presenting with severe ScGVHD. The patients were treated with different immunosuppressive regimens and supportive therapies. PET/CT scanning started 60.5 ± 3.3 minutes, and PET/MRI 139.5 ± 16.7 minutes, after F-FDG application. MRI acquisition included T1- (precontrast and postcontrast) and T2-weighted sequences. SUVmean, T1 contrast enhancement, and T2 signal intensity from region-of-interest analysis were calculated for different fascial and muscular compartments. In addition, musculoskeletal MRI findings and the modified Rodnan skin score were assessed. All patients underwent imaging follow-up. RESULTS: At baseline PET/MRI, ScGVHD-related musculoskeletal abnormalities consisted of increased signal and/or thickening of involved anatomical structures on T2-weighted and T1 postcontrast images as well as an increased FDG uptake. At follow-up, ScGVHD-related imaging findings decreased (SUVmean n = 4, mean T1 contrast enhancement n = 5, mean T2 signal intensity n = 3) or progressed (SUVmean n = 3, mean T1 contrast enhancement n = 2, mean T2 signal intensity n = 4). Clinically, the modified Rodnan skin score improved for 5 follow-ups and progressed for 2. SUVmean values correlated between the PET/CT and PET/MRI acquisitions (r = 0.660, P = 0.014), as did T1 contrast enhancement and T2 signal (r = 0.668, P = 0.012), but the SUVmean values did not correlate with the MRI parameters. CONCLUSIONS: PET/MRI, as a combined morphological and functional technique, seems to assess the inflammatory processes from different points of view and therefore provides, in part, complementary information.

Web [BibTex]



A Survey on Policy Search for Robotics

Deisenroth, M., Neumann, G., Peters, J.

Foundations and Trends in Robotics, 2(1-2):1-142, 2013 (article)

DOI [BibTex]



Reinforcement Learning in Robotics: A Review

Kober, J., Bagnell, D., Peters, J.

International Journal of Robotics Research, 32(11):1238–1274, 2013 (article)

PDF DOI [BibTex]



Multimodal information improves the rapid detection of mental fatigue

Laurent, F., Valderrama, M., Besserve, M., Guillard, M., Lachaux, J., Martinerie, J., Florence, G.

Biomedical Signal Processing and Control, 8(4):400 - 408, 2013 (article)

Web DOI [BibTex]



Interactive Domain Adaptation for the Classification of Remote Sensing Images using Active Learning

Persello, C.

IEEE Geoscience and Remote Sensing Letters, 10(4):736-740, 2013 (article)

DOI [BibTex]


Learning to Select and Generalize Striking Movements in Robot Table Tennis

Mülling, K., Kober, J., Kroemer, O., Peters, J.

International Journal of Robotics Research, 32(3):263-279, 2013 (article)

PDF DOI [BibTex]


HiFiVE: A Hilbert Space Embedding of Fiber Variability Estimates for Uncertainty Modeling and Visualization

Schultz, T., Schlaffke, L., Schölkopf, B., Schmidt-Wilcke, T.

Computer Graphics Forum, 32(3):121-130, (Editors: B Preim, P Rheingans, and H Theisel), Blackwell Publishing, Oxford, UK, Eurographics Conference on Visualization (EuroVis), 2013 (article)

DOI [BibTex]



Detection and attribution of large spatiotemporal extreme events in Earth observation data

Zscheischler, J., Mahecha, M., Harmeling, S., Reichstein, M.

Ecological Informatics, 15, pages: 66-73, 2013 (article)

Abstract
The latest climate projections suggest that both the frequency and intensity of climate extremes will be substantially modified over the course of the coming decades. As a consequence, we need to understand to what extent, and via which pathways, climate extremes affect the state and functionality of terrestrial ecosystems and the associated biogeochemical cycles on a global scale. So far, the impacts of climate extremes on the terrestrial biosphere have mainly been investigated on the basis of case studies, while global assessments are widely lacking. In order to facilitate global analyses of this kind, we present a methodological framework that firstly detects spatiotemporally contiguous extremes in Earth observations, and secondly infers the likely pathway of the preceding climate anomaly. The approach does not require long time series, is computationally fast, and is easily applicable to a variety of data sets with different spatial and temporal resolutions. The key element of our analysis strategy is to search directly in the relevant observations for spatiotemporally connected components exceeding a certain percentile threshold. We also put an emphasis on characterizing the distribution of extreme events, and scrutinize the attribution issue. We exemplify the analysis strategy by exploring the fraction of absorbed photosynthetically active radiation (fAPAR) from 1982 to 2011. Our results suggest that the hot spots of extremes in fAPAR lie in Northeastern Brazil, Southeastern Australia, Kenya and Tanzania. Moreover, we demonstrate that the size distribution of extremes follows a distinct power law. The attribution framework reveals that extremes in fAPAR are primarily driven by phases of water scarcity.
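The detection step described in the abstract (percentile threshold, then spatiotemporally connected components) can be sketched as follows; the toy anomaly cube and the 2nd-percentile threshold are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import label

def detect_extremes(data, q=2.0):
    """Threshold at the q-th percentile and return the connected components
    of the resulting mask, with component sizes sorted largest-first."""
    mask = data < np.percentile(data, q)       # negative extremes
    labeled, _ = label(mask)                   # spatiotemporal connectivity
    sizes = np.bincount(labeled.ravel())[1:]   # drop background label 0
    return labeled, np.sort(sizes)[::-1]

rng = np.random.default_rng(0)
cube = rng.normal(size=(24, 32, 32))   # toy (time, lat, lon) anomaly field
cube[5:8, 10:14, 10:14] -= 5.0         # implant one large negative extreme
labeled, sizes = detect_extremes(cube)
# sizes[0] is the implanted 3x4x4 event (possibly merged with nearby noise)
```

Because the search operates directly on the observations, no long training record is needed; only the percentile and the connectivity structure are free choices.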

Web DOI [BibTex]



Simultaneous PET/MR reveals Brain Function in Activated and Resting State on Metabolic, Hemodynamic and Multiple Temporal Scales

Wehrl, H., Hossain, M., Lankes, K., Liu, C., Bezrukov, I., Martirosian, P., Schick, F., Reischl, G., Pichler, B.

Nature Medicine, 19, pages: 1184–1189, 2013 (article)

Abstract
Combined positron emission tomography (PET) and magnetic resonance imaging (MRI) is a new tool to study functional processes in the brain. Here we study brain function in response to a barrel-field stimulus simultaneously using PET, which traces changes in glucose metabolism on a slow time scale, and functional MRI (fMRI), which assesses fast vascular and oxygenation changes during activation. We found spatial and quantitative discrepancies between the PET and the fMRI activation data. The functional connectivity of the rat brain was assessed by both modalities: the fMRI approach determined a total of nine known neural networks, whereas the PET method identified seven glucose metabolism–related networks. These results demonstrate the feasibility of combined PET-MRI for the simultaneous study of the brain at activation and rest, revealing comprehensive and complementary information to further decode brain function and brain networks.

Web DOI [BibTex]



Finding Potential Support Vectors in Separable Classification Problems

Varagnolo, D., Del Favero, S., Dinuzzo, F., Schenato, L., Pillonetto, G.

IEEE Transactions on Neural Networks and Learning Systems, 24(11):1799-1813, 2013 (article)

DOI [BibTex]



Open-Box Spectral Clustering: Applications to Medical Image Analysis

Schultz, T., Kindlmann, G.

IEEE Transactions on Visualization and Computer Graphics, 19(12):2100-2108, 2013 (article)

DOI [BibTex]



im3shape: a maximum likelihood galaxy shear measurement code for cosmic gravitational lensing

Zuntz, J., Kacprzak, T., Voigt, L., Hirsch, M., Rowe, B., Bridle, S.

Monthly Notices of the Royal Astronomical Society, 434(2):1604-1618, Oxford University Press, 2013 (article)

DOI [BibTex]



Accurate detection of differential RNA processing

Drewe, P., Stegle, O., Hartmann, L., Kahles, A., Bohnert, R., Wachter, A., Borgwardt, K. M., Rätsch, G.

Nucleic Acids Research, 41(10):5189-5198, 2013 (article)

DOI [BibTex]



Detecting regulatory gene–environment interactions with unmeasured environmental factors

Fusi, N., Lippert, C., Borgwardt, K. M., Lawrence, N. D., Stegle, O.

Bioinformatics, 29(11):1382-1389, 2013 (article)

DOI [BibTex]



Fragmentation of Slow Wave Sleep after Onset of Complete Locked-In State

Soekadar, S. R., Born, J., Birbaumer, N., Bensch, M., Halder, S., Murguialday, A. R., Gharabaghi, A., Nijboer, F., Schölkopf, B., Martens, S.

Journal of Clinical Sleep Medicine, 9(9):951-953, 2013 (article)

DOI [BibTex]



Structural learning

Braun, D.

Scholarpedia, 8(10):12312, October 2013 (article)

Abstract
Structural learning in motor control refers to a metalearning process whereby an agent extracts (abstract) invariants from its sensorimotor stream when experiencing a range of environments that share similar structure. Such invariants can then be exploited for faster generalization and learning-to-learn when experiencing novel, but related task environments.

DOI [BibTex]



The effect of model uncertainty on cooperation in sensorimotor interactions

Grau-Moya, J., Hez, E., Pezzulo, G., Braun, D. A.

Journal of the Royal Society Interface, 10(87):1-11, October 2013 (article)

Abstract
Decision-makers have been shown to rely on probabilistic models for perception and action. However, these models can be incorrect or partially wrong, in which case the decision-maker has to cope with model uncertainty. Model uncertainty has recently also been shown to be an important determinant of sensorimotor behaviour in humans that can lead to risk-sensitive deviations from Bayes optimal behaviour towards worst-case or best-case outcomes. Here, we investigate the effect of model uncertainty on cooperation in sensorimotor interactions similar to the stag-hunt game, where players develop models about the other player and decide between a pay-off-dominant cooperative solution and a risk-dominant, non-cooperative solution. In simulations, we show that players who allow for optimistic deviations from their opponent model are much more likely to converge to cooperative outcomes. We also implemented this agent model in a virtual reality environment, and let human subjects play against a virtual player. In this game, subjects' pay-offs were experienced as forces opposing their movements. During the experiment, we manipulated the risk sensitivity of the computer player and observed human responses. We found not only that humans adaptively changed their level of cooperation depending on the risk sensitivity of the computer player but also that their initial play exhibited characteristic risk-sensitive biases. Our results suggest that model uncertainty is an important determinant of cooperation in two-player sensorimotor interactions.
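The risk-sensitive valuation behind such optimistic or pessimistic deviations can be illustrated with an exponential certainty equivalent; the stag-hunt payoffs and the cooperation belief below are toy numbers, not the experiment's values:

```python
import numpy as np

def certainty_equivalent(payoffs, probs, beta):
    """Risk-sensitive value (1/beta) * log E[exp(beta * U)]: beta > 0 leans
    optimistic (best-case), beta < 0 pessimistic (worst-case), and
    beta -> 0 recovers the expected payoff."""
    payoffs = np.asarray(payoffs, float)
    probs = np.asarray(probs, float)
    if beta == 0:
        return float(probs @ payoffs)
    return float(np.log(probs @ np.exp(beta * payoffs)) / beta)

# Toy stag hunt: stag pays 4 if the opponent cooperates (believed p = 0.5),
# 0 otherwise; hare pays a safe, risk-dominant 3.
stag = lambda beta: certainty_equivalent([4.0, 0.0], [0.5, 0.5], beta)
hare = 3.0
assert stag(0.0) < hare    # a risk-neutral player defects (value 2.0)
assert stag(2.0) > hare    # an optimistic player cooperates
```

Under this valuation, sufficiently optimistic players prefer the pay-off-dominant cooperative action even when their opponent model makes cooperation uncertain, matching the qualitative finding of the simulations.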

DOI [BibTex]



Thermodynamics as a theory of decision-making with information-processing costs

Ortega, P. A., Braun, D. A.

Proceedings of the Royal Society of London A, 469(2153):1-18, May 2013 (article)

Abstract
Perfectly rational decision-makers maximize expected utility, but crucially ignore the resource costs incurred when determining optimal actions. Here, we propose a thermodynamically inspired formalization of bounded rational decision-making where information processing is modelled as state changes in thermodynamic systems that can be quantified by differences in free energy. By optimizing a free energy, bounded rational decision-makers trade off expected utility gains and information-processing costs measured by the relative entropy. As a result, the bounded rational decision-making problem can be rephrased in terms of well-known variational principles from statistical physics. In the limit when computational costs are ignored, the maximum expected utility principle is recovered. We discuss links to existing decision-making frameworks and applications to human decision-making experiments that are at odds with expected utility theory. Since most of the mathematical machinery can be borrowed from statistical physics, the main contribution is to re-interpret the formalism of thermodynamic free-energy differences in terms of bounded rational decision-making and to discuss its relationship to human decision-making experiments.
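A minimal worked example of the free-energy trade-off described above, assuming a discrete action set, a uniform prior, and illustrative utilities (the inverse temperature beta plays the role of the information-processing budget):

```python
import numpy as np

def bounded_rational_policy(utilities, prior, beta):
    """Maximizer of the free energy E_p[U] - (1/beta) * KL(p || prior):
    a softmax of the utilities around the prior policy."""
    w = np.asarray(prior, float) * np.exp(beta * np.asarray(utilities, float))
    return w / w.sum()

U = np.array([1.0, 0.8, 0.1])
p0 = np.ones(3) / 3
p_cheap = bounded_rational_policy(U, p0, beta=0.0)    # processing prohibitively
                                                      # costly: stay at the prior
p_costly = bounded_rational_policy(U, p0, beta=50.0)  # costs negligible: recovers
                                                      # maximum expected utility
```

As beta grows, the policy concentrates on the utility-maximizing action, reproducing the limit in which the maximum expected utility principle is recovered.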

DOI [BibTex]


2010


Causal relationships between frequency bands of extracellular signals in visual cortex revealed by an information theoretic analysis

Besserve, M., Schölkopf, B., Logothetis, N., Panzeri, S.

Journal of Computational Neuroscience, 29(3):547-566, December 2010 (article)

PDF DOI [BibTex]



Tackling Box-Constrained Optimization via a New Projected Quasi-Newton Approach

Kim, D., Sra, S., Dhillon, I.

SIAM Journal on Scientific Computing, 32(6):3548-3563 , December 2010 (article)

Abstract
Numerous scientific applications across a variety of fields depend on box-constrained convex optimization. Box-constrained problems therefore continue to attract research interest. We address box-constrained (strictly convex) problems by deriving two new quasi-Newton algorithms. Our algorithms are positioned between the projected-gradient [J. B. Rosen, J. SIAM, 8 (1960), pp. 181–217] and projected-Newton [D. P. Bertsekas, SIAM J. Control Optim., 20 (1982), pp. 221–246] methods. We also prove their convergence under a simple Armijo step-size rule. We provide experimental results for two particular box-constrained problems: nonnegative least squares (NNLS), and nonnegative Kullback–Leibler (NNKL) minimization. For both NNLS and NNKL our algorithms perform competitively as compared to well-established methods on medium-sized problems; for larger problems our approach frequently outperforms the competition.
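For orientation, the plain projected-gradient baseline that the paper's quasi-Newton algorithms are positioned against can be sketched on NNLS; this is the simpler reference method, not the proposed algorithm, and the problem instance is a toy assumption:

```python
import numpy as np

def nnls_pg(A, b, iters=2000):
    """Projected gradient for min ||Ax - b||^2 subject to x >= 0:
    a gradient step on the quadratic, then projection onto the
    nonnegative orthant (the box constraint)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = np.maximum(x - step * (A.T @ (A @ x - b)), 0.0)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 5))
x_true = np.array([0.0, 1.0, 0.5, 0.0, 2.0])  # known nonnegative solution
x = nnls_pg(A, A @ x_true)                    # recovers x_true
```

The paper's contribution is to interpolate between this method and projected Newton by adding quasi-Newton scaling while keeping the cheap Armijo step-size rule.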

Web DOI [BibTex]



Algorithmen zum Automatischen Erlernen von Motorfähigkeiten

Peters, J., Kober, J., Schaal, S.

at - Automatisierungstechnik, 58(12):688-694, December 2010 (article)

Abstract
Robot learning methods which allow autonomous robots to adapt to novel situations have been a long-standing vision of robotics, artificial intelligence, and the cognitive sciences. However, to date, learning techniques have yet to fulfill this promise, as only few methods manage to scale to the high-dimensional domains of manipulator robotics, or even the new upcoming trend of humanoid robotics. Where scaling was achieved at all, it was usually only in precisely pre-structured domains. In this paper, we investigate the ingredients of a general approach to policy learning, with the goal of applying it to motor skill refinement in order to get one step closer to human-like performance. To do so, we study two major components of such an approach: firstly, policy learning algorithms which can be applied in the general setting of motor skill learning, and, secondly, a theoretically well-founded general approach to representing the required control structures for task representation and execution.

Web DOI [BibTex]



PAC-Bayesian Analysis of Co-clustering and Beyond

Seldin, Y., Tishby, N.

Journal of Machine Learning Research, 11, pages: 3595-3646, December 2010 (article)

PDF PDF [BibTex]



Gaussian Processes for Machine Learning (GPML) Toolbox

Rasmussen, C., Nickisch, H.

Journal of Machine Learning Research, 11, pages: 3011-3015, November 2010 (article)

Abstract
The GPML toolbox provides a wide range of functionality for Gaussian process (GP) inference and prediction. GPs are specified by mean and covariance functions; we offer a library of simple mean and covariance functions and mechanisms to compose more complex ones. Several likelihood functions are supported, including Gaussian and heavy-tailed for regression, as well as others suitable for classification. Finally, a range of inference methods is provided, including exact and variational inference, Expectation Propagation, and Laplace's method for dealing with non-Gaussian likelihoods, as well as FITC for dealing with large regression tasks.
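GPML itself is a MATLAB/Octave toolbox; as a language-neutral sketch of the exact-inference path it implements (zero mean, isotropic squared-exponential covariance as in covSEiso, Gaussian likelihood as in likGauss/infExact), consider this NumPy version with illustrative hyperparameters:

```python
import numpy as np

def cov_se(X1, X2, ell=1.0, sf=1.0):
    """Isotropic squared-exponential covariance for 1-D inputs."""
    return sf**2 * np.exp(-0.5 * (X1[:, None] - X2[None, :])**2 / ell**2)

def gp_regress(X, y, Xs, sn=0.1):
    """Exact GP regression: Cholesky-based posterior mean and variance."""
    K = cov_se(X, X) + sn**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = cov_se(Xs, X)
    mu = Ks @ alpha                          # predictive mean
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(cov_se(Xs, Xs)) - (v**2).sum(axis=0) + sn**2
    return mu, var                           # noisy predictive variance

X = np.linspace(0, 6, 40)
y = np.sin(X)
mu, var = gp_regress(X, y, X)   # mean tracks sin(x) closely at the data
```

In GPML the same computation is a one-line call once mean, covariance, likelihood, and inference method have been composed; the sketch only mirrors the underlying linear algebra.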

Web [BibTex]



Cryo-EM structure and rRNA model of a translating eukaryotic 80S ribosome at 5.5-Å resolution

Armache, J-P., Jarasch, A., Anger, AM., Villa, E., Becker, T., Bhushan, S., Jossinet, F., Habeck, M., Dindar, G., Franckenberg, S., Marquez, V., Mielke, T., Thomm, M., Berninghausen, O., Beatrix, B., Söding, J., Westhof, E., Wilson, DN., Beckmann, R.

Proceedings of the National Academy of Sciences of the United States of America, 107(46):19748-19753, November 2010 (article)

Abstract
Protein biosynthesis, the translation of the genetic code into polypeptides, occurs on ribonucleoprotein particles called ribosomes. Although X-ray structures of bacterial ribosomes are available, high-resolution structures of eukaryotic 80S ribosomes are lacking. Using cryoelectron microscopy and single-particle reconstruction, we have determined the structure of a translating plant (Triticum aestivum) 80S ribosome at 5.5-Å resolution. This map, together with a 6.1-Å map of a Saccharomyces cerevisiae 80S ribosome, has enabled us to model ∼98% of the rRNA. Accurate assignment of the rRNA expansion segments (ES) and variable regions has revealed unique ES–ES and r-protein–ES interactions, providing insight into the structure and evolution of the eukaryotic ribosome.

Web DOI [BibTex]



Policy gradient methods

Peters, J.

Scholarpedia, 5(11):3698, November 2010 (article)

Abstract
Policy gradient methods are a class of reinforcement learning techniques that rely upon optimizing parametrized policies with respect to the expected return (long-term cumulative reward) by gradient descent. They do not suffer from many of the problems that have marred traditional reinforcement learning approaches, such as the lack of guarantees for a value function, the intractability resulting from uncertain state information, and the complexity arising from continuous states and actions.
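The basic likelihood-ratio (REINFORCE) estimator behind such methods can be sketched on a toy bandit; the arm rewards, learning rate, and running baseline are illustrative assumptions, not part of the article:

```python
import numpy as np

def reinforce_bandit(means, episodes=3000, lr=0.1, seed=0):
    """REINFORCE on a Gaussian bandit: ascend the score-function estimate
    (r - baseline) * grad log pi(a) of the expected-return gradient
    for a softmax policy with parameters theta."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(len(means))
    baseline = 0.0
    for _ in range(episodes):
        p = np.exp(theta - theta.max())
        p /= p.sum()                          # softmax policy
        a = rng.choice(len(p), p=p)
        r = means[a] + rng.normal(0.0, 0.1)
        grad_log = -p
        grad_log[a] += 1.0                    # d log pi(a) / d theta
        theta += lr * (r - baseline) * grad_log
        baseline += 0.05 * (r - baseline)     # baseline for variance reduction
    return theta

theta = reinforce_bandit(np.array([0.2, 1.0, 0.5]))
# the policy concentrates on the highest-reward arm, index 1
```

The baseline subtraction leaves the gradient estimate unbiased while reducing its variance, which is the same mechanism the article discusses for the general case.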

Web DOI [BibTex]



Localization of eukaryote-specific ribosomal proteins in a 5.5-Å cryo-EM map of the 80S eukaryotic ribosome

Armache, J-P., Jarasch, A., Anger, AM., Villa, E., Becker, T., Bhushan, S., Jossinet, F., Habeck, M., Dindar, G., Franckenberg, S., Marquez, V., Mielke, T., Thomm, M., Berninghausen, O., Beatrix, B., Söding, J., Westhof, E., Wilson, DN., Beckmann, R.

Proceedings of the National Academy of Sciences of the United States of America, 107(46):19754-19759, November 2010 (article)

Abstract
Protein synthesis in all living organisms occurs on ribonucleoprotein particles, called ribosomes. Despite the universality of this process, eukaryotic ribosomes are significantly larger in size than their bacterial counterparts due in part to the presence of 80 r proteins rather than 54 in bacteria. Using cryoelectron microscopy reconstructions of a translating plant (Triticum aestivum) 80S ribosome at 5.5-Å resolution, together with a 6.1-Å map of a translating Saccharomyces cerevisiae 80S ribosome, we have localized and modeled 74/80 (92.5%) of the ribosomal proteins, encompassing 12 archaeal/eukaryote-specific small subunit proteins as well as the complete complement of the ribosomal proteins of the eukaryotic large subunit. Near-complete atomic models of the 80S ribosome provide insights into the structure, function, and evolution of the eukaryotic translational apparatus.

Web DOI [BibTex]



Spatio-Spectral Remote Sensing Image Classification With Graph Kernels

Camps-Valls, G., Shervashidze, N., Borgwardt, K.

IEEE Geoscience and Remote Sensing Letters, 7(4):741-745, October 2010 (article)

Abstract
This letter presents a graph kernel for spatio-spectral remote sensing image classification with support vector machines (SVMs). The method considers higher order relations in the neighborhood (beyond pairwise spatial relations) to iteratively compute a kernel matrix for SVM learning. The proposed kernel is easy to compute and constitutes a powerful alternative to existing approaches. The capabilities of the method are illustrated in several multi- and hyperspectral remote sensing images acquired over both urban and agricultural areas.

Web DOI [BibTex]



Causal Inference Using the Algorithmic Markov Condition

Janzing, D., Schölkopf, B.

IEEE Transactions on Information Theory, 56(10):5168-5194, October 2010 (article)

Abstract
Inferring the causal structure that links $n$ observables is usually based upon detecting statistical dependences and choosing simple graphs that make the joint measure Markovian. Here we argue why causal inference is also possible when the sample size is one. We develop a theory of how to generate causal graphs explaining similarities between single objects. To this end, we replace the notion of conditional stochastic independence in the causal Markov condition with the vanishing of conditional algorithmic mutual information and describe the corresponding causal inference rules. We explain why a consistent reformulation of causal inference in terms of algorithmic complexity implies a new inference principle that also takes into account the complexity of conditional probability densities, making it possible to select among Markov equivalent causal graphs. This insight provides a theoretical foundation for a heuristic principle proposed in earlier work. We also sketch some ideas on how to replace Kolmogorov complexity with decidable complexity criteria. This can be seen as an algorithmic analog of replacing the empirically undecidable question of statistical independence with practical independence tests that are based on implicit or explicit assumptions on the underlying distribution.

PDF Web DOI [BibTex]



Recurrent Policy Gradients

Wierstra, D., Förster, A., Peters, J., Schmidhuber, J.

Logic Journal of the IGPL, 18(5):620-634, October 2010 (article)

Abstract
Reinforcement learning for partially observable Markov decision problems (POMDPs) is a challenge as it requires policies with an internal state. Traditional approaches suffer significantly from this shortcoming and usually make strong assumptions on the problem domain, such as perfect system models, state estimators, and a Markovian hidden system. Recurrent neural networks (RNNs) offer a natural framework for dealing with policy learning using hidden state and require only a few limiting assumptions. As they can be trained well using gradient descent, they are suited for policy gradient approaches. In this paper, we present a policy gradient method, the Recurrent Policy Gradient, which constitutes a model-free reinforcement learning method. It is aimed at training limited-memory stochastic policies on problems which require long-term memories of past observations. The approach involves approximating a policy gradient for a recurrent neural network by backpropagating return-weighted characteristic eligibilities through time. Using a "Long Short-Term Memory" RNN architecture, we are able to outperform previous RL methods on three important benchmark tasks. Furthermore, we show that using history-dependent baselines helps reduce estimation variance significantly, thus enabling our approach to tackle more challenging, highly stochastic environments.

PDF Web DOI [BibTex]



Discriminative frequent subgraph mining with optimality guarantees

Thoma, M., Cheng, H., Gretton, A., Han, J., Kriegel, H., Smola, A., Song, L., Yu, P., Yan, X., Borgwardt, K.

Journal of Statistical Analysis and Data Mining, 3(5):302–318, October 2010 (article)

Abstract
The goal of frequent subgraph mining is to detect subgraphs that frequently occur in a dataset of graphs. In classification settings, one is often interested in discovering discriminative frequent subgraphs, whose presence or absence is indicative of the class membership of a graph. In this article, we propose an approach to feature selection on frequent subgraphs, called CORK, that combines two central advantages. First, it optimizes a submodular quality criterion, which means that we can yield a near-optimal solution using greedy feature selection. Second, our submodular quality function can be integrated into gSpan, the state-of-the-art tool for frequent subgraph mining, and help to prune the search space for discriminative frequent subgraphs even during frequent subgraph mining.
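The greedy selection that submodularity licenses can be sketched generically; the coverage function below is a stand-in for CORK's actual criterion (coverage functions are submodular, so the same near-optimality argument applies):

```python
def greedy_select(features, quality, k):
    """Greedy maximization of a submodular quality function: repeatedly add
    the feature with the largest marginal gain. For monotone submodular
    functions this carries the classic (1 - 1/e) near-optimality guarantee."""
    selected = []
    for _ in range(k):
        best = max((f for f in features if f not in selected),
                   key=lambda f: quality(selected + [f]) - quality(selected))
        selected.append(best)
    return selected

# Stand-in quality: how many graphs a set of subgraph features "hits".
hits = {"f1": {1, 2, 3}, "f2": {3, 4}, "f3": {5}, "f4": {1, 2}}
quality = lambda S: len(set().union(*(hits[f] for f in S))) if S else 0
assert greedy_select(list(hits), quality, 2) == ["f1", "f2"]
```

Note how "f4" is never chosen despite covering two graphs: its marginal gain after "f1" is zero, which is exactly the pruning signal CORK exploits inside gSpan.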

Web DOI [BibTex]



Combining active learning and reactive control for robot grasping

Kroemer, O., Detry, R., Piater, J., Peters, J.

Robotics and Autonomous Systems, 58(9):1105-1116, September 2010 (article)

Abstract
Grasping an object is a task that inherently needs to be treated in a hybrid fashion. The system must decide both where and how to grasp the object. While selecting where to grasp requires learning about the object as a whole, the execution only needs to reactively adapt to the context close to the grasp’s location. We propose a hierarchical controller that reflects the structure of these two sub-problems, and attempts to learn solutions that work for both. A hybrid architecture is employed by the controller to make use of various machine learning methods that can cope with the large amount of uncertainty inherent to the task. The controller’s upper level selects where to grasp the object using a reinforcement learner, while the lower level comprises an imitation learner and a vision-based reactive controller to determine appropriate grasping motions. The resulting system is able to quickly learn good grasps of a novel object in an unstructured environment, by executing smooth reaching motions and preshaping the hand depending on the object’s geometry. The system was evaluated both in simulation and on a real robot.
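The two-level structure can be caricatured as follows; the candidate grasps, success probabilities, and bandit-style upper level are illustrative stand-ins for the paper's reinforcement learner, and the lower-level imitation/reactive controller is reduced to a simulated outcome:

```python
import random

class HierarchicalGraspController:
    """Toy two-level controller: the upper level learns *where* to grasp
    (incremental values over candidate locations), while the lower level
    (how to reach and preshape) is abstracted into a success probability."""
    def __init__(self, candidates, eps=0.2, lr=0.2, seed=0):
        self.values = {c: 0.0 for c in candidates}
        self.eps, self.lr = eps, lr
        self.rng = random.Random(seed)

    def select(self):
        if self.rng.random() < self.eps:          # occasional exploration
            return self.rng.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def update(self, grasp, success):
        # upper level: value update from the executed grasp's outcome
        self.values[grasp] += self.lr * (float(success) - self.values[grasp])

ctrl = HierarchicalGraspController(["rim", "handle", "body"])
reliability = {"rim": 0.2, "handle": 0.9, "body": 0.5}  # hidden ground truth
for _ in range(500):
    g = ctrl.select()
    ctrl.update(g, reliability[g] > ctrl.rng.random())
# the learned values single out "handle" as the most reliable grasp
```

The point of the hierarchy is the division of labor: the slow learner only ranks grasp locations, while reactive execution absorbs the local uncertainty.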

PDF Web DOI [BibTex]



Nonparametric Regression between General Riemannian Manifolds

Steinke, F., Hein, M., Schölkopf, B.

SIAM Journal on Imaging Sciences, 3(3):527-563, September 2010 (article)

Abstract
We study nonparametric regression between Riemannian manifolds based on regularized empirical risk minimization. Regularization functionals for mappings between manifolds should respect the geometry of input and output manifold and be independent of the chosen parametrization of the manifolds. We define and analyze the three most simple regularization functionals with these properties and present a rather general scheme for solving the resulting optimization problem. As application examples we discuss interpolation on the sphere, fingerprint processing, and correspondence computations between three-dimensional surfaces. We conclude with characterizing interesting and sometimes counterintuitive implications and new open problems that are specific to learning between Riemannian manifolds and are not encountered in multivariate regression in Euclidean space.

Web DOI [BibTex]



Hybrid PET/MRI of Intracranial Masses: Initial Experiences and Comparison to PET/CT

Boss, A., Bisdas, S., Kolb, A., Hofmann, M., Ernemann, U., Claussen, C., Pfannenberg, C., Pichler, B., Reimold, M., Stegger, L.

Journal of Nuclear Medicine, 51(8):1198-1205, August 2010 (article)

Web DOI [BibTex]



libDAI: A Free and Open Source C++ Library for Discrete Approximate Inference in Graphical Models

Mooij, JM.

Journal of Machine Learning Research, 11, pages: 2169-2173, August 2010 (article)

Abstract
This paper describes the software package libDAI, a free and open source C++ library that provides implementations of various exact and approximate inference methods for graphical models with discrete-valued variables. libDAI supports directed graphical models (Bayesian networks) as well as undirected ones (Markov random fields and factor graphs). It offers various approximations of the partition sum, marginal probability distributions, and maximum probability states. Parameter learning is also supported. A feature comparison with other open source software packages for approximate inference is given. libDAI is licensed under the GPL v2+ license and is available at http://www.libdai.org.

PDF PDF [BibTex]

Convolutive blind source separation by efficient blind deconvolution and minimal filter distortion

Zhang, K., Chan, L.

Neurocomputing, 73(13-15):2580-2588, August 2010 (article)

Abstract
Convolutive blind source separation (BSS) usually encounters two difficulties: the filter indeterminacy in the recovered sources and the relatively high computational load. In this paper we propose an efficient method for convolutive BSS that addresses both issues. It consists of two stages, namely, multichannel blind deconvolution (MBD) and learning the post-filters with the minimum filter distortion (MFD) principle. We present a computationally efficient approach to MBD in the first stage: a vector autoregression (VAR) model is first fitted to the data, admitting a closed-form solution and giving temporally independent errors; traditional independent component analysis (ICA) is then applied to these errors to produce the MBD results. In the second stage, the least linear reconstruction error (LLRE) constraint of the separation system, which was previously used to regularize the solutions to nonlinear ICA, enforces an MFD principle on the estimated mixing system for convolutive BSS. One can then easily learn the post-filters to preserve the temporal structure of the sources. We show that with this principle, each recovered source is approximately the principal component of the contributions of this source to all observations. Experimental results on both synthetic data and real room recordings show the good performance of this method.

PDF PDF DOI [BibTex]


Biased Feedback in Brain-Computer Interfaces

Barbero, A., Grosse-Wentrup, M.

Journal of NeuroEngineering and Rehabilitation, 7(34):1-4, July 2010 (article)

Abstract
Even though feedback is considered to play an important role in learning how to operate a brain-computer interface (BCI), to date no significant influence of feedback design on BCI performance has been reported in the literature. In this work, we adapt a standard motor-imagery BCI paradigm to study how BCI performance is affected by biasing the belief subjects have in their level of control over the BCI system. Our findings indicate that subjects already capable of operating a BCI are impeded by inaccurate feedback, while subjects normally performing at or close to chance level may actually benefit from an incorrect belief about their performance level. Our results imply that optimal feedback design in BCIs should take a subject’s current skill level into account.

PDF DOI [BibTex]

Varieties of Justification in Machine Learning

Corfield, D.

Minds and Machines, 20(2):291-301, July 2010 (article)

Abstract
Forms of justification for inductive machine learning techniques are discussed and classified into four types. This is done with a view to introducing some of these techniques and their justificatory guarantees to the attention of philosophers, and to initiating a discussion as to whether they must be treated separately or rather can be viewed consistently from within a single framework.

PDF DOI [BibTex]

Dirichlet Process Gaussian Mixture Models: Choice of the Base Distribution

Görür, D., Rasmussen, C.

Journal of Computer Science and Technology, 25(4):653-664, July 2010 (article)

Abstract
In the Bayesian mixture modeling framework it is possible to infer the necessary number of components to model the data, so the number of components need not be restricted explicitly. Nonparametric mixture models sidestep the problem of finding the “correct” number of mixture components by assuming infinitely many components. In this paper Dirichlet process mixture (DPM) models are cast as infinite mixture models and inference using Markov chain Monte Carlo is described. The specification of the priors on the model parameters is often guided by mathematical and practical convenience. The primary goal of this paper is to compare the choice of conjugate and non-conjugate base distributions on a particular class of DPM models which is widely used in applications, the Dirichlet process Gaussian mixture model (DPGMM). We compare the computational efficiency and modeling performance of DPGMMs defined using a conjugate and a conditionally conjugate base distribution. We show that better density models can result from using a wider class of priors with no or only a modest increase in computational effort.

PDF PDF DOI [BibTex]

Robust probabilistic superposition and comparison of protein structures

Mechelke, M., Habeck, M.

BMC Bioinformatics, 11(363):1-13, July 2010 (article)

PDF DOI [BibTex]

Results of the GREAT08 Challenge: An image analysis competition for cosmological lensing

Bridle, S., Balan, S., Bethge, M., Gentile, M., Harmeling, S., Heymans, C., Hirsch, M., Hosseini, R., Jarvis, M., Kirk, D., Kitching, T., Kuijken, K., Lewis, A., Paulin-Henriksson, S., Schölkopf, B., Velander, M., Voigt, L., Witherick, D., Amara, A., Bernstein, G., Courbin, F., Gill, M., Heavens, A., Mandelbaum, R., Massey, R., Moghaddam, B., Rassat, A., Refregier, A., Rhodes, J., Schrabback, T., Shawe-Taylor, J., Shmakova, M., van Waerbeke, L., Wittman, D.

Monthly Notices of the Royal Astronomical Society, 405(3):2044-2061, July 2010 (article)

Abstract
We present the results of the GREAT08 Challenge, a blind analysis challenge to infer weak gravitational lensing shear distortions from images. The primary goal was to stimulate new ideas by presenting the problem to researchers outside the shear measurement community. Six GREAT08 Team methods were presented at the launch of the Challenge and five additional groups submitted results during the 6 month competition. Participants analyzed 30 million simulated galaxies with a range in signal-to-noise ratio, point-spread function ellipticity, galaxy size, and galaxy type. The large quantity of simulations allowed shear measurement methods to be assessed for the first time at a level of accuracy suitable for currently planned future cosmic shear observations. Different methods perform well in different parts of simulation parameter space and come close to the target level of accuracy in several of these. A number of fresh ideas have emerged as a result of the Challenge, including a re-examination of the process of combining information from different galaxies, which reduces the dependence on realistic galaxy modelling. The image simulations will become increasingly sophisticated in future GREAT challenges; meanwhile, the GREAT08 simulations remain a benchmark for additional developments in shear measurement algorithms.

Web DOI [BibTex]

Remote Sensing Feature Selection by Kernel Dependence Estimation

Camps-Valls, G., Mooij, J., Schölkopf, B.

IEEE Geoscience and Remote Sensing Letters, 7(3):587-591, July 2010 (article)

Abstract
This letter introduces a nonlinear measure of independence between random variables for remote sensing supervised feature selection. The so-called Hilbert–Schmidt independence criterion (HSIC) is a kernel method for evaluating statistical dependence and it is based on computing the Hilbert–Schmidt norm of the cross-covariance operator of mapped samples in the corresponding Hilbert spaces. The HSIC empirical estimator is easy to compute and has good theoretical and practical properties. Rather than using this estimate for maximizing the dependence between the selected features and the class labels, we propose the more sensitive criterion of minimizing the associated HSIC p-value. Results in multispectral, hyperspectral, and SAR data feature selection for classification show the good performance of the proposed approach.
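The empirical HSIC estimator mentioned above has a compact closed form, tr(K H L H)/(n-1)^2, with Gram matrices K, L and centering matrix H. The Python sketch below is illustrative only: the letter works with remote-sensing features and ranks them by the associated p-value, which is omitted here, and the Gaussian kernel bandwidth is an arbitrary choice.

```python
import numpy as np

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC with Gaussian kernels: tr(K H L H) / (n-1)^2."""
    n = len(x)
    def gram(v):
        sq = (v[:, None] - v[None, :]) ** 2
        return np.exp(-sq / (2.0 * sigma ** 2))
    K, L = gram(x), gram(y)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y_dependent = x + 0.1 * rng.normal(size=200)    # strongly dependent on x
y_independent = rng.normal(size=200)            # independent of x
```

Here `hsic(x, y_dependent)` comes out clearly larger than `hsic(x, y_independent)`; unlike a correlation coefficient, the statistic also detects nonlinear dependence.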

PDF Web DOI [BibTex]

Clustering stability: an overview

von Luxburg, U.

Foundations and Trends in Machine Learning, 2(3):235-274, July 2010 (article)

Abstract
A popular method for selecting the number of clusters is based on stability arguments: one chooses the number of clusters such that the corresponding clustering results are "most stable". In recent years, a series of papers has analyzed the behavior of this method from a theoretical point of view. However, the results are very technical and difficult to interpret for non-experts. In this paper we give a high-level overview about the existing literature on clustering stability. In addition to presenting the results in a slightly informal but accessible way, we relate them to each other and discuss their different implications.

PDF DOI [BibTex]

Justifying Additive Noise Model-Based Causal Discovery via Algorithmic Information Theory

Janzing, D., Steudel, B.

Open Systems and Information Dynamics, 17(2):189-212, June 2010 (article)

Abstract
A recent method for causal discovery is in many cases able to infer whether X causes Y or Y causes X for just two observed variables X and Y. It is based on the observation that there exist (non-Gaussian) joint distributions P(X,Y) for which Y may be written as a function of X up to an additive noise term that is independent of X, while no such model exists from Y to X. Whenever this is the case, one prefers the causal model X → Y. Here we justify this method by showing that the causal hypothesis Y → X is unlikely because it requires a specific tuning between P(Y) and P(X|Y) to generate a distribution that admits an additive noise model from X to Y. To quantify the amount of tuning needed, we derive lower bounds on the algorithmic information shared by P(Y) and P(X|Y). This way, our justification is consistent with recent approaches for using algorithmic information theory for causal reasoning. We extend this principle to the case where P(X,Y) almost admits an additive noise model. Our results suggest that the above conclusion is more reliable if the complexity of P(Y) is high.
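The inference rule being justified can be sketched in a few lines: regress each variable on the other and check in which direction the residuals are (approximately) independent of the input. The Python sketch below is not from the paper; it uses a polynomial fit as a stand-in for nonparametric regression and a biased HSIC statistic with an arbitrary kernel bandwidth as the dependence score.

```python
import numpy as np

def hsic(a, b, sigma=1.0):
    """Biased empirical HSIC with Gaussian kernels, used as a dependence score."""
    n = len(a)
    def gram(v):
        sq = (v[:, None] - v[None, :]) ** 2
        return np.exp(-sq / (2.0 * sigma ** 2))
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(gram(a) @ H @ gram(b) @ H) / (n - 1) ** 2

def anm_residual_dependence(cause, effect, deg=3):
    """Fit effect = f(cause) by a polynomial, score dependence of residuals on cause."""
    coeffs = np.polyfit(cause, effect, deg)
    residuals = effect - np.polyval(coeffs, cause)
    return hsic(cause, residuals)

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 300)
y = x ** 3 + 0.05 * rng.normal(size=300)   # additive noise model X -> Y

forward = anm_residual_dependence(x, y)    # residuals ~ independent of X
backward = anm_residual_dependence(y, x)   # no additive noise model Y -> X
```

In the true direction the residuals are just the noise, so the dependence score is small; in the backward direction the residuals remain structured, and one prefers the direction with the lower score, here X → Y.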

PDF Web DOI [BibTex]