

2018


Grip Stabilization of Novel Objects using Slip Prediction

Veiga, F., Peters, J., Hermans, T.

IEEE Transactions on Haptics, 2018 (article) In press

DOI Project Page [BibTex]

Domain Adaptation Under Causal Assumptions

Lechner, T.

Eberhard Karls Universität Tübingen, Germany, 2018 (mastersthesis)

[BibTex]

A Differentially Private Kernel Two-Sample Test

Raj*, A., Law*, L., Sejdinovic*, D., Park, M.

2018, *equal contribution (conference) Submitted

[BibTex]

Electrophysiological correlates of neurodegeneration in motor and non-motor brain regions in amyotrophic lateral sclerosis—implications for brain–computer interfacing

Kellmeyer, P., Grosse-Wentrup, M., Schulze-Bonhage, A., Ziemann, U., Ball, T.

Journal of Neural Engineering, 15(4):041003, IOP Publishing, 2018 (article)

link (url) [BibTex]

Quantum machine learning: a classical perspective

Ciliberto, C., Herbster, M., Ialongo, A. D., Pontil, M., Rocchetto, A., Severini, S., Wossnig, L.

Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 474(2209):20170551, 2018 (article)

link (url) DOI [BibTex]

Maschinelles Lernen: Entwicklung ohne Grenzen?

Schölkopf, B.

In Mit Optimismus in die Zukunft schauen. Künstliche Intelligenz - Chancen und Rahmenbedingungen, pages: 26-34, (Editors: Bender, G. and Herbrich, R. and Siebenhaar, K.), B&S Siebenhaar Verlag, 2018 (incollection)

[BibTex]

Kernel-based tests for joint independence

Pfister, N., Bühlmann, P., Schölkopf, B., Peters, J.

Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80(1):5-31, 2018 (article)

DOI [BibTex]

Prediction of Glucose Tolerance without an Oral Glucose Tolerance Test

Babbar, R., Heni, M., Peter, A., Hrabě de Angelis, M., Häring, H., Fritsche, A., Preissl, H., Schölkopf, B., Wagner, R.

Frontiers in Endocrinology, 9, pages: 82, 2018 (article)

DOI Project Page [BibTex]

Invariant Models for Causal Transfer Learning

Rojas-Carulla, M., Schölkopf, B., Turner, R., Peters, J.

Journal of Machine Learning Research, 19(36):1-34, 2018 (article)

link (url) [BibTex]

MOABB: Trustworthy algorithm benchmarking for BCIs

Jayaram, V., Barachant, A.

Journal of Neural Engineering, 15(6):066011, 2018 (article)

link (url) DOI Project Page [BibTex]

f-Divergence constrained policy improvement

Belousov, B., Peters, J.

Journal of Machine Learning Research, 2018 (article) Submitted

Project Page [BibTex]

Phylogenetic convolutional neural networks in metagenomics

Fioravanti*, D., Giarratano*, Y., Maggio*, V., Agostinelli, C., Chierici, M., Jurman, G., Furlanello, C.

BMC Bioinformatics, 19(2):49, 2018, *equal contribution (article)

DOI [BibTex]

Food specific inhibitory control under negative mood in binge-eating disorder: Evidence from a multimethod approach

Leehr, E. J., Schag, K., Dresler, T., Grosse-Wentrup, M., Hautzinger, M., Fallgatter, A. J., Zipfel, S., Giel, K. E., Ehlis, A.

International Journal of Eating Disorders, 51(2):112-123, Wiley Online Library, 2018 (article)

DOI [BibTex]

Probabilistic Approaches to Stochastic Optimization

Mahsereci, M.

Eberhard Karls Universität Tübingen, Germany, 2018 (phdthesis)

link (url) Project Page [BibTex]

Reinforcement Learning for High-Speed Robotics with Muscular Actuation

Guist, S.

Ruprecht-Karls-Universität Heidelberg, 2018 (mastersthesis)

[BibTex]

Linking imaging to omics utilizing image-guided tissue extraction

Disselhorst, J. A., Krueger, M. A., Ud-Dean, S. M. M., Bezrukov, I., Jarboui, M. A., Trautwein, C., Traube, A., Spindler, C., Cotton, J. M., Leibfritz, D., Pichler, B. J.

Proceedings of the National Academy of Sciences, 115(13):E2980-E2987, 2018 (article)

DOI [BibTex]

Methods in Psychophysics

Wichmann, F. A., Jäkel, F.

In Stevens’ Handbook of Experimental Psychology and Cognitive Neuroscience, Volume 5 (Methodology), Chapter 7, 4th edition, John Wiley & Sons, Inc., 2018 (inbook)

[BibTex]

Discriminative Transfer Learning for General Image Restoration

Xiao, L., Heide, F., Heidrich, W., Schölkopf, B., Hirsch, M.

IEEE Transactions on Image Processing, 27(8):4091-4104, 2018 (article)

DOI [BibTex]

Photorealistic Video Super Resolution

Pérez-Pellitero, E., Sajjadi, M. S. M., Hirsch, M., Schölkopf, B.

Workshop and Challenge on Perceptual Image Restoration and Manipulation (PIRM) at the 15th European Conference on Computer Vision (ECCV), 2018 (poster)

[BibTex]

Denotational Validation of Higher-order Bayesian Inference

Ścibior, A., Kammar, O., Vákár, M., Staton, S., Yang, H., Cai, Y., Ostermann, K., Moss, S. K., Heunen, C., Ghahramani, Z.

Proceedings of the ACM on Principles of Programming Languages (POPL), 2(Article No. 60):1-29, ACM, 2018 (conference)

DOI Project Page [BibTex]

Dissecting the synapse- and frequency-dependent network mechanisms of in vivo hippocampal sharp wave-ripples

Ramirez-Villegas, J. F., Willeke, K. F., Logothetis, N. K., Besserve, M.

Neuron, 100(5):1224-1240, 2018 (article)

link (url) DOI Project Page [BibTex]

Retinal image quality of the human eye across the visual field

Meding, K., Hirsch, M., Wichmann, F. A.

14th Biannual Conference of the German Society for Cognitive Science (KOGWIS 2018), 2018 (poster)

[BibTex]

In-Hand Object Stabilization by Independent Finger Control

Veiga, F. F., Edin, B. B., Peters, J.

IEEE Transactions on Robotics, 2018 (article) Submitted

Project Page [BibTex]

Visualizing and understanding Sum-Product Networks

Vergari, A., Di Mauro, N., Esposito, F.

Machine Learning, 2018 (article)

DOI [BibTex]

Transfer Learning for BCIs

Jayaram, V., Fiebig, K., Peters, J., Grosse-Wentrup, M.

In Brain–Computer Interfaces Handbook, pages: 425-442, 22, (Editors: Chang S. Nam, Anton Nijholt and Fabien Lotte), CRC Press, 2018 (incollection)

Project Page [BibTex]

Probabilistic Ordinary Differential Equation Solvers — Theory and Applications

Schober, M.

Eberhard Karls Universität Tübingen, Germany, 2018 (phdthesis)

[BibTex]

A machine learning approach to taking EEG-based computer interfaces out of the lab

Jayaram, V.

Graduate Training Centre of Neuroscience, IMPRS, Eberhard Karls Universität Tübingen, Germany, 2018 (phdthesis)

[BibTex]

Non-Equilibrium Relations for Bounded Rational Decision-Making in Changing Environments

Grau-Moya, J., Krüger, M., Braun, D. A.

Entropy, 20(1):1-28, January 2018 (article)

Abstract
Living organisms from single cells to humans need to adapt continuously to respond to changes in their environment. The process of behavioural adaptation can be thought of as improving decision-making performance according to some utility function. Here, we consider an abstract model of organisms as decision-makers with limited information-processing resources that trade off between maximization of utility and computational costs measured by a relative entropy, in a similar fashion to thermodynamic systems undergoing isothermal transformations. Such systems minimize the free energy to reach equilibrium states that balance internal energy and entropic cost. When there is a fast change in the environment, these systems evolve in a non-equilibrium fashion because they are unable to follow the path of equilibrium distributions. Here, we apply concepts from non-equilibrium thermodynamics to characterize decision-makers that adapt to changing environments under the assumption that the temporal evolution of the utility function is externally driven and does not depend on the decision-maker’s action. This allows one to quantify performance loss due to imperfect adaptation in a general manner and, additionally, to find relations for decision-making similar to Crooks’ fluctuation theorem and Jarzynski’s equality. We provide simulations of several exemplary decision and inference problems in the discrete and continuous domains to illustrate the new relations.

DOI [BibTex]
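
The trade-off described in the abstract has a simple closed form for a discrete action set, which the sketch below illustrates; the prior p0, the utilities U, and the inverse temperature beta are hypothetical placeholder values, not quantities from the paper.

```python
import numpy as np

# Minimal sketch of the utility/information trade-off described above: a
# bounded-rational decision-maker with prior p0 and resource parameter beta
# chooses the policy maximizing  E_p[U] - (1/beta) * KL(p || p0),
# whose solution is a softmax re-weighting of the prior.
def bounded_rational_policy(p0, U, beta):
    """Free-energy-optimal policy p*(a) proportional to p0(a) * exp(beta * U(a))."""
    w = p0 * np.exp(beta * U)
    return w / w.sum()

def trade_off_objective(p, p0, U, beta):
    """Expected utility minus information cost, E_p[U] - (1/beta) * KL(p || p0)."""
    return p @ U - np.sum(p * np.log(p / p0)) / beta

p0 = np.ones(4) / 4                    # uniform prior over four actions (toy values)
U = np.array([1.0, 0.5, 0.2, 0.0])     # hypothetical utilities
for beta in (0.1, 1.0, 10.0):          # small beta stays near the prior, large beta acts greedily
    p = bounded_rational_policy(p0, U, beta)
    print(beta, np.round(p, 3), round(trade_off_objective(p, p0, U, beta), 3))
```

When the utilities change faster than the decision-maker can re-equilibrate to this optimum, it lags behind the equilibrium distributions, which is the non-equilibrium regime the paper analyzes.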


2013


Camera-specific Image Denoising

Schober, M.

Eberhard Karls Universität Tübingen, Germany, October 2013 (diplomathesis)

PDF [BibTex]

Studying large-scale brain networks: electrical stimulation and neural-event-triggered fMRI

Logothetis, N., Eschenko, O., Murayama, Y., Augath, M., Steudel, T., Evrard, H., Besserve, M., Oeltermann, A.

Twenty-Second Annual Computational Neuroscience Meeting (CNS*2013), BMC Neuroscience, 14(Supplement 1):A1, July 2013 (talk)

Web [BibTex]

Correlation of Simultaneously Acquired Diffusion-Weighted Imaging and 2-Deoxy-[18F] fluoro-2-D-glucose Positron Emission Tomography of Pulmonary Lesions in a Dedicated Whole-Body Magnetic Resonance/Positron Emission Tomography System

Schmidt, H., Brendle, C., Schraml, C., Martirosian, P., Bezrukov, I., Hetzel, J., Müller, M., Sauter, A., Claussen, C., Pfannenberg, C., Schwenzer, N.

Investigative Radiology, 48(5):247-255, May 2013 (article)

Web [BibTex]

Replacing Causal Faithfulness with Algorithmic Independence of Conditionals

Lemeire, J., Janzing, D.

Minds and Machines, 23(2):227-249, May 2013 (article)

Abstract
Independence of Conditionals (IC) has recently been proposed as a basic rule for causal structure learning. If a Bayesian network represents the causal structure, its Conditional Probability Distributions (CPDs) should be algorithmically independent. In this paper we compare IC with causal faithfulness (FF), stating that only those conditional independences that are implied by the causal Markov condition hold true. The latter is a basic postulate in common approaches to causal structure learning. The common spirit of FF and IC is to reject causal graphs for which the joint distribution looks ‘non-generic’. The difference lies in the notion of genericity: FF sometimes rejects models just because one of the CPDs is simple, for instance if the CPD describes a deterministic relation. IC does not behave in this undesirable way. It only rejects a model when there is a non-generic relation between different CPDs although each CPD looks generic when considered separately. Moreover, it detects relations between CPDs that cannot be captured by conditional independences. IC therefore helps in distinguishing causal graphs that induce the same conditional independences (i.e., they belong to the same Markov equivalence class). The usual justification for FF implicitly assumes a prior that is a probability density on the parameter space. IC can be justified by Solomonoff’s universal prior, assigning non-zero probability to those points in parameter space that have a finite description. In this way, it favours simple CPDs, and therefore respects Occam’s razor. Since Kolmogorov complexity is uncomputable, IC is not directly applicable in practice. We argue that it is nevertheless helpful, since it has already served as inspiration and justification for novel causal inference algorithms.

PDF Web DOI [BibTex]

What can neurons do for their brain? Communicate selectivity with bursts

Balduzzi, D., Tononi, G.

Theory in Biosciences, 132(1):27-39, Springer, March 2013 (article)

Abstract
Neurons deep in cortex interact with the environment extremely indirectly; the spikes they receive and produce are pre- and post-processed by millions of other neurons. This paper proposes two information-theoretic constraints guiding the production of spikes, that help ensure bursting activity deep in cortex relates meaningfully to events in the environment. First, neurons should emphasize selective responses with bursts. Second, neurons should propagate selective inputs by burst-firing in response to them. We show the constraints are necessary for bursts to dominate information-transfer within cortex, thereby providing a substrate allowing neurons to distribute credit amongst themselves. Finally, since synaptic plasticity degrades the ability of neurons to burst selectively, we argue that homeostatic regulation of synaptic weights is necessary, and that it is best performed offline during sleep.

PDF Web DOI [BibTex]

Apprenticeship Learning with Few Examples

Boularias, A., Chaib-draa, B.

Neurocomputing, 104, pages: 83-96, March 2013 (article)

Abstract
We consider the problem of imitation learning when the examples, provided by an expert human, are scarce. Apprenticeship learning via inverse reinforcement learning provides an efficient tool for generalizing the examples, based on the assumption that the expert's policy maximizes a value function, which is a linear combination of state and action features. Most apprenticeship learning algorithms use only simple empirical averages of the features in the demonstrations as a statistic of the expert's policy. However, this method is efficient only when the number of examples is sufficiently large to cover most of the states, or the dynamics of the system is nearly deterministic. In this paper, we show that the quality of the learned policies is sensitive to the error in estimating the averages of the features when the dynamics of the system is stochastic. To reduce this error, we introduce two new approaches for bootstrapping the demonstrations by assuming that the expert is near-optimal and the dynamics of the system is known. In the first approach, the expert's examples are used to learn a reward function and to generate further examples from the corresponding optimal policy. The second approach uses a transfer technique, known as graph homomorphism, in order to generalize the expert's actions to unvisited regions of the state space. Empirical results on simulated robot navigation problems show that our approach is able to learn sufficiently good policies from a significantly small number of examples.

Web DOI [BibTex]
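
The statistic the abstract refers to, the empirical average of features over the expert's demonstrations, can be computed as in the sketch below; the feature map phi, the discount gamma, and the toy trajectories are hypothetical and not taken from the paper.

```python
import numpy as np

# Minimal sketch of the expert statistic discussed above: discounted empirical
# feature expectations  mu_hat = (1/m) * sum_traj sum_t gamma^t * phi(s_t, a_t).
def feature_expectations(demos, phi, gamma=0.95):
    """demos: list of trajectories, each a list of (state, action) pairs."""
    mu = None
    for traj in demos:
        for t, (s, a) in enumerate(traj):
            f = (gamma ** t) * phi(s, a)
            mu = f if mu is None else mu + f
    return mu / len(demos)

# toy example: scalar states, two discrete actions encoded one-hot
phi = lambda s, a: np.array([s, float(a == 0), float(a == 1)])
demos = [[(0.0, 0), (1.0, 1), (2.0, 1)],
         [(0.5, 0), (1.5, 0)]]
print(feature_expectations(demos, phi))
```

With few or noisy demonstrations this estimate is exactly where the estimation error discussed in the abstract enters, which is what the proposed bootstrapping approaches aim to reduce.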

Quasi-Newton Methods: A New Direction

Hennig, P., Kiefel, M.

Journal of Machine Learning Research, 14(1):843-865, March 2013 (article)

Abstract
Four decades after their invention, quasi-Newton methods are still state of the art in unconstrained numerical optimization. Although not usually interpreted thus, these are learning algorithms that fit a local quadratic approximation to the objective function. We show that many, including the most popular, quasi-Newton methods can be interpreted as approximations of Bayesian linear regression under varying prior assumptions. This new notion elucidates some shortcomings of classical algorithms, and lights the way to a novel nonparametric quasi-Newton method, which is able to make more efficient use of available information at computational cost similar to its predecessors.

website+code pdf link (url) [BibTex]
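
For reference, the sketch below shows the kind of classical update the paper reinterprets: the textbook BFGS inverse-Hessian update, which by construction satisfies the secant equation. It is not the nonparametric method proposed in the paper, and the quadratic test problem is made up.

```python
import numpy as np

# Classical BFGS inverse-Hessian update (the kind of least-change rule the
# paper above casts as Bayesian linear regression under a particular prior).
def bfgs_inverse_update(H, s, y):
    """Update the inverse-Hessian estimate H from the step s = x_new - x_old
    and the gradient change y = g_new - g_old."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# toy check on a quadratic with Hessian A: the updated estimate reproduces
# the observed curvature pair, i.e. it satisfies the secant equation H y = s
A = np.array([[3.0, 1.0], [1.0, 2.0]])
s = np.array([1.0, -0.5])
y = A @ s
H = bfgs_inverse_update(np.eye(2), s, y)
print(np.allclose(H @ y, s))  # True
```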

Regional effects of magnetization dispersion on quantitative perfusion imaging for pulsed and continuous arterial spin labeling

Cavusoglu, M., Pohmann, R., Burger, H. C., Uludag, K.

Magnetic Resonance in Medicine, 69(2):524-530, February 2013 (article)

Abstract
Most experiments assume a global transit delay time with blood flowing from the tagging region to the imaging slice in plug flow without any dispersion of the magnetization. However, because of cardiac pulsation, nonuniform cross-sectional flow profile, and complex vessel networks, the transit delay time is not a single value but follows a distribution. In this study, we explored the regional effects of magnetization dispersion on quantitative perfusion imaging for varying transit times within a very large interval from the direct comparison of pulsed, pseudo-continuous, and dual-coil continuous arterial spin labeling encoding schemes. Longer distances between tagging and imaging region typically used for continuous tagging schemes enhance the regional bias on the quantitative cerebral blood flow measurement causing an underestimation up to 37% when plug flow is assumed as in the standard model.

Web DOI [BibTex]
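
The core point of the abstract, that a distribution of transit delays is not equivalent to a single plug-flow delay, can be illustrated with a deliberately simplified toy calculation; the mono-exponential label decay, the blood T1 value, and the gamma-shaped delay distribution below are assumptions for illustration only, not the kinetic model used in the paper.

```python
import numpy as np

# Toy illustration (not the paper's model): label attenuation evaluated at a
# single mean transit delay vs. averaged over a distribution of delays.
rng = np.random.default_rng(0)
T1_blood = 1.65                                           # s, assumed blood T1
delays = rng.gamma(shape=20.0, scale=0.05, size=100_000)  # assumed delay distribution, mean ~1 s
plug_flow = np.exp(-delays.mean() / T1_blood)             # single-delay (plug flow) assumption
dispersed = np.exp(-delays / T1_blood).mean()             # averaged over the delay distribution
print(plug_flow, dispersed)  # the two attenuation factors differ, biasing quantification
```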

The multivariate Watson distribution: Maximum-likelihood estimation and other aspects

Sra, S., Karp, D.

Journal of Multivariate Analysis, 114, pages: 256-269, February 2013 (article)

Abstract
This paper studies fundamental aspects of modelling data using multivariate Watson distributions. Although these distributions are natural for modelling axially symmetric data (i.e., unit vectors where x and -x are equivalent), for high-dimensions using them can be difficult—largely because for Watson distributions even basic tasks such as maximum-likelihood are numerically challenging. To tackle the numerical difficulties some approximations have been derived. But these are either grossly inaccurate in high-dimensions [K.V. Mardia, P. Jupp, Directional Statistics, second ed., John Wiley & Sons, 2000] or when reasonably accurate [A. Bijral, M. Breitenbach, G.Z. Grudic, Mixture of Watson distributions: a generative model for hyperspherical embeddings, in: Artificial Intelligence and Statistics, AISTATS 2007, 2007, pp. 35–42], they lack theoretical justification. We derive new approximations to the maximum-likelihood estimates; our approximations are theoretically well-defined, numerically accurate, and easy to compute. We build on our parameter estimation and discuss mixture-modelling with Watson distributions; here we uncover a hitherto unknown connection to the “diametrical clustering” algorithm of Dhillon et al. [I.S. Dhillon, E.M. Marcotte, U. Roshan, Diametrical clustering for identifying anticorrelated gene clusters, Bioinformatics 19 (13) (2003) 1612–1619].

Web DOI [BibTex]
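
One standard ingredient of Watson maximum-likelihood estimation is easy to sketch: the mean-axis estimate is an extreme eigenvector of the sample scatter matrix. The concentration estimate, which involves Kummer's confluent hypergeometric function and is what the paper derives approximations for, is omitted here; the synthetic axial data below are made up for illustration.

```python
import numpy as np

# Sketch of the mean-axis ML estimate for axial data (x and -x equivalent):
# the leading eigenvector of the scatter matrix S = (1/n) * sum_i x_i x_i^T
# (for positive concentration).  Estimating kappa itself needs Kummer's
# function and is the part the paper above approximates.
def watson_mean_axis(X):
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # project onto the unit sphere
    S = X.T @ X / len(X)
    eigvals, eigvecs = np.linalg.eigh(S)
    return eigvecs[:, -1]                             # axis of largest scatter

rng = np.random.default_rng(0)
axis = np.array([1.0, 0.0, 0.0])
X = axis + 0.2 * rng.standard_normal((500, 3))        # noisy directions around the axis
X[rng.random(500) < 0.5] *= -1                        # random sign flips: axial symmetry
print(np.round(watson_mean_axis(X), 3))               # close to [1, 0, 0] up to sign
```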

How the result of graph clustering methods depends on the construction of the graph

Maier, M., von Luxburg, U., Hein, M.

ESAIM: Probability & Statistics, 17, pages: 370-418, January 2013 (article)

Abstract
We study the scenario of graph-based clustering algorithms such as spectral clustering. Given a set of data points, one first has to construct a graph on the data points and then apply a graph clustering algorithm to find a suitable partition of the graph. Our main question is if and how the construction of the graph (choice of the graph, choice of parameters, choice of weights) influences the outcome of the final clustering result. To this end we study the convergence of cluster quality measures such as the normalized cut or the Cheeger cut on various kinds of random geometric graphs as the sample size tends to infinity. It turns out that the limit values of the same objective function are systematically different on different types of graphs. This implies that clustering results systematically depend on the graph and can be very different for different types of graph. We provide examples to illustrate the implications on spectral clustering.

PDF DOI [BibTex]
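
The pipeline the abstract studies, first building a graph on the data and then clustering it, looks roughly like the sketch below, here with a symmetric kNN graph with Gaussian weights and the spectral relaxation of the normalized cut; the particular construction and all parameter values are illustrative choices, and the paper's point is precisely that such choices change the result.

```python
import numpy as np

# Sketch of graph-based clustering: construct a symmetric kNN graph with
# Gaussian edge weights, then take the spectral (normalized-cut) bipartition.
def ncut_bipartition(X, k_neighbors=10, sigma=1.0):
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)    # squared distances
    W = np.zeros_like(D2)
    idx = np.argsort(D2, axis=1)[:, 1:k_neighbors + 1]     # k nearest neighbours (skip self)
    for i, nbrs in enumerate(idx):
        W[i, nbrs] = np.exp(-D2[i, nbrs] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                                 # symmetrize the graph
    d = W.sum(axis=1)
    L_sym = np.eye(len(X)) - (W / np.sqrt(d)[:, None]) / np.sqrt(d)[None, :]
    _, vecs = np.linalg.eigh(L_sym)
    f = vecs[:, 1] / np.sqrt(d)                # relaxed normalized-cut indicator vector
    return (f > np.median(f)).astype(int)      # threshold to obtain the two clusters

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
print(np.bincount(ncut_bipartition(X)))        # two clusters of roughly 50 points each
```

Re-running this with a different k_neighbors, sigma, or graph type (e.g. a fully connected or r-neighbourhood graph) is exactly the kind of variation whose effect on the limit values the paper quantifies.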


Falsification and future performance

Balduzzi, D.

In Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence, 7070, pages: 65-78, Lecture Notes in Computer Science, Springer, Berlin, Germany, Solomonoff 85th Memorial Conference, January 2013 (inproceedings)

Abstract
We information-theoretically reformulate two measures of capacity from statistical learning theory: empirical VC-entropy and empirical Rademacher complexity. We show these capacity measures count the number of hypotheses about a dataset that a learning algorithm falsifies when it finds the classifier in its repertoire minimizing empirical risk. It then follows that the future performance of predictors on unseen data is controlled in part by how many hypotheses the learner falsifies. As a corollary we show that empirical VC-entropy quantifies the message length of the true hypothesis in the optimal code of a particular probability distribution, the so-called actual repertoire.

PDF Web DOI [BibTex]
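
Of the two capacity measures mentioned in the abstract, the empirical Rademacher complexity of a finite hypothesis class has a direct Monte Carlo sketch; the hypothesis classes and the sample below are random placeholders, not data from the paper.

```python
import numpy as np

# Monte Carlo estimate of empirical Rademacher complexity for a finite class:
#   R_hat = E_sigma[ max_{h in H} (1/n) * sum_i sigma_i * h(x_i) ],
# with sigma_i independent +/-1 coin flips.  Larger values mean the class can
# correlate with more random labellings of the fixed sample.
def empirical_rademacher(predictions, n_draws=2000, seed=0):
    """predictions: (num_hypotheses, n) array of +/-1 outputs on the sample."""
    rng = np.random.default_rng(seed)
    n = predictions.shape[1]
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, n))
    return (sigma @ predictions.T / n).max(axis=1).mean()

rng = np.random.default_rng(1)
H_small = rng.choice([-1, 1], size=(2, 50))    # 2 hypotheses evaluated on 50 points
H_large = rng.choice([-1, 1], size=(200, 50))  # richer class on the same sample size
print(empirical_rademacher(H_small), empirical_rademacher(H_large))
# the richer class correlates better with random labellings, i.e. higher complexity
```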

Explicit eigenvalues of certain scaled trigonometric matrices

Sra, S.

Linear Algebra and its Applications, 438(1):173-181, January 2013 (article)

DOI [BibTex]

How Sensitive Is the Human Visual System to the Local Statistics of Natural Images?

Gerhard, H., Wichmann, F., Bethge, M.

PLoS Computational Biology, 9(1):e1002873, January 2013 (article)

Abstract
Several aspects of primate visual physiology have been identified as adaptations to local regularities of natural images. However, much less work has measured visual sensitivity to local natural image regularities. Most previous work focuses on global perception of large images and shows that observers are more sensitive to visual information when image properties resemble those of natural images. In this work we measure human sensitivity to local natural image regularities using stimuli generated by patch-based probabilistic natural image models that have been related to primate visual physiology. We find that human observers can learn to discriminate the statistical regularities of natural image patches from those represented by current natural image models after very few exposures and that discriminability depends on the degree of regularities captured by the model. The quick learning we observed suggests that the human visual system is biased for processing natural images, even at very fine spatial scales, and that it has a surprisingly large knowledge of the regularities in natural images, at least in comparison to the state-of-the-art statistical models of natural images.

DOI [BibTex]

A neural population model for visual pattern detection

Goris, R., Putzeys, T., Wagemans, J., Wichmann, F.

Psychological Review, 120(3):472–496, 2013 (article)

DOI [BibTex]

Feedback Error Learning for Rhythmic Motor Primitives

Gopalan, N., Deisenroth, M., Peters, J.

In Proceedings of 2013 IEEE International Conference on Robotics and Automation (ICRA 2013), pages: 1317-1322, 2013 (inproceedings)

PDF DOI [BibTex]

Gaussian Process Vine Copulas for Multivariate Dependence

Lopez-Paz, D., Hernandez-Lobato, J., Ghahramani, Z.

In Proceedings of the 30th International Conference on Machine Learning, W&CP 28(2), pages: 10-18, (Editors: S Dasgupta and D McAllester), JMLR, ICML, 2013, Poster: http://people.tuebingen.mpg.de/dlopez/papers/icml2013_gpvine_poster.pdf (inproceedings)

PDF Web [BibTex]

A Review of Performance Variations in SMR-Based Brain–Computer Interfaces (BCIs)

Grosse-Wentrup, M., Schölkopf, B.

In Brain-Computer Interface Research, pages: 39-51, 4, SpringerBriefs in Electrical and Computer Engineering, (Editors: Guger, C., Allison, B. Z. and Edlinger, G.), Springer, 2013 (inbook)

PDF DOI [BibTex]
