Sebastian Gerwinn
Position: PhD Student
Room no.: 1.B.05
Phone: +49 7071 601 1772

My research interests lie in the area of Bayesian inference and computational neuroscience. In particular, I am interested in characterizing the relationship between sensory signals and neural responses. The methods I consider most promising for tackling such tasks are Bayesian methods.

The applicability of Bayesian methods is often limited by the fact that they are computationally prohibitive. My main focus has therefore been to alleviate this problem by developing approximate methods that remain feasible at much larger scales and can therefore be applied to realistically sized data.

The main advantage of a Bayesian treatment lies in the explicit representation of the uncertainties involved. Access to this knowledge enables further analyses such as experimental design or model selection.



Stimulus Response Relationship

I have analyzed the relationship between stimuli and neural responses from three different perspectives: (1) the encoding perspective, (2) the decoding perspective, and (3) the joint-occurrence perspective.

In a first project I investigated the system identification task corresponding to the encoding direction of the stimulus-response relationship. I developed an approximate Bayesian inference method that is feasible for generalized linear models, one of the most successful and commonly used classes of generative models. As a result, we obtained not only point estimates of the parameters, but also model-based confidence intervals, which we in turn used for feature selection and for estimating the functional connectivity within populations of neurons.
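As an illustration of the general idea (not the sparse inference scheme developed in the project), here is a minimal sketch of Bayesian inference for a Poisson GLM: a Newton solver finds the MAP estimate, and a Laplace approximation to the posterior yields model-based error bars. The Gaussian prior, the prior width `tau`, and the toy data are all invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy encoding data: spike counts y ~ Poisson(exp(X @ w_true))
n, d = 500, 5
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -0.5, 0.0, 0.0, 0.3])
y = rng.poisson(np.exp(X @ w_true))

def grad(w, tau=1.0):
    """Gradient of the negative log posterior (Gaussian prior N(0, tau^2 I))."""
    return X.T @ (np.exp(X @ w) - y) + w / tau**2

def hessian(w, tau=1.0):
    """Hessian of the negative log posterior."""
    rate = np.exp(X @ w)
    return (X * rate[:, None]).T @ X + np.eye(d) / tau**2

# Newton's method for the MAP estimate (the objective is convex)
w = np.zeros(d)
for _ in range(50):
    step = np.linalg.solve(hessian(w), grad(w))
    w -= step
    if np.max(np.abs(step)) < 1e-10:
        break

# Laplace approximation: posterior ~ N(w_map, H^{-1});
# error bars come from the diagonal of the inverse Hessian.
cov = np.linalg.inv(hessian(w))
se = np.sqrt(np.diag(cov))
for i in range(d):
    print(f"w[{i}] = {w[i]:+.2f} +/- {2 * se[i]:.2f}  (true {w_true[i]:+.1f})")
```

The same posterior covariance is what makes feature selection possible: a weight whose error bar comfortably includes zero can be pruned.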

Second, I analyzed the relationship from a decoding point of view. Here, using the leaky integrate-and-fire neuron model, I obtained a simple yet accurate decoding algorithm. Again, with a Bayesian treatment it is possible not only to decode the most likely stimulus, but also to assign to each stimulus the probability that it caused the observed neural response.
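The decoding idea can be sketched with a deliberately simplified stand-in for the integrate-and-fire model: assume each candidate stimulus drives a known Poisson firing rate, and apply Bayes' rule to assign each stimulus the probability of having caused the observed spike count. The tuning curve, the candidate stimuli, and the uniform prior below are all invented for illustration.

```python
import math
import numpy as np

# Hypothetical tuning: each candidate stimulus s drives a known expected
# spike count rate(s) = 5 * exp(s) in the observation window.
stimuli = np.array([0.0, 0.5, 1.0, 1.5])
rates = 5.0 * np.exp(stimuli)
prior = np.full(len(stimuli), 1.0 / len(stimuli))  # uniform prior over stimuli

observed_count = 12  # spikes observed in the window

# Bayes' rule: P(s | n) is proportional to P(n | s) * P(s),
# with a Poisson likelihood P(n | s) = rate^n exp(-rate) / n!
likelihood = rates**observed_count * np.exp(-rates) / math.factorial(observed_count)
posterior = likelihood * prior
posterior /= posterior.sum()

map_stimulus = stimuli[np.argmax(posterior)]
for s, p in zip(stimuli, posterior):
    print(f"P(s={s:.1f} | n={observed_count}) = {p:.3f}")
print("MAP stimulus:", map_stimulus)
```

The output is a full distribution over stimuli, not just the MAP value, which is exactly the extra information the Bayesian treatment provides.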

Third, merging both perspectives, I looked at the joint occurrence of stimuli and neural responses. Using common descriptive statistics such as the spike-triggered average and the spike-triggered covariance, I built a maximum entropy model. This model can be used both as a generative model and as a decoding model that exhibits the same descriptive statistics as the observed ones while, by the maximum entropy property, imposing the fewest additional constraints.
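Because the maximum entropy distribution matching a given mean and covariance is the Gaussian, the construction can be sketched by fitting a Gaussian to a spike-triggered stimulus ensemble: it reproduces the spike-triggered average and covariance by design, and via Bayes' rule it also yields a decoding model. The ensemble, the stimulus prior, and the marginal spike probability below are all synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated spike-triggered stimulus ensemble (2-D stimuli for illustration)
spike_stimuli = rng.normal(loc=[1.0, -0.5], scale=[0.5, 1.5], size=(2000, 2))

# Descriptive statistics: spike-triggered average and covariance
sta = spike_stimuli.mean(axis=0)
stc = np.cov(spike_stimuli, rowvar=False)

def log_density(x, mu, cov):
    """Log density of a multivariate Gaussian N(mu, cov)."""
    diff = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (len(mu) * np.log(2 * np.pi) + logdet
                   + diff @ np.linalg.solve(cov, diff))

# Generative use: N(sta, stc) is the max-ent model with these statistics,
# so sampling it produces new "spike-triggering" stimuli.
samples = rng.multivariate_normal(sta, stc, size=5)

# Decoding use: P(spike | x) via Bayes' rule, assuming a standard normal
# stimulus prior and a marginal spike probability of 0.1.
prior_spike = 0.1
x = np.array([1.0, -0.5])
log_ratio = log_density(x, sta, stc) - log_density(x, np.zeros(2), np.eye(2))
odds = np.exp(log_ratio) * prior_spike / (1 - prior_spike)
p_spike = odds / (1 + odds)
print("P(spike | x) =", p_spike)
```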

Unsupervised Learning of Steerable Filters
Although the pixel representation of an image changes strongly under affine transformations such as translation, rotation, and scaling, the content of the image remains largely unchanged. In particular, if the changes of a D-dimensional light intensity vector can be described by a one-parameter Lie group, it is possible to find a lossless image representation in which one component corresponds to the transformation parameter and the other (D-1) components are invariant under the Lie group transformation. To derive such image representations, we construct suitable generative models with which steerable filters can be learned in an unsupervised manner. In particular, we were able to show that an anti-symmetric variant of canonical correlation analysis (CCA) makes it possible to determine a complete basis for 32x32 image patches composed of rotation-invariant steerable filters.
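A classical hand-crafted special case (the derivative-of-Gaussian steerable pair of Freeman and Adelson, not the learned CCA basis from this project) illustrates what steerability means: a filter oriented at any angle is an exact linear combination of two fixed basis filters.

```python
import numpy as np

# Sampling grid for the filters
ax = np.linspace(-4, 4, 65)
X, Y = np.meshgrid(ax, ax)

def gauss_dx(x, y):
    """x-derivative of a 2-D Gaussian: the basis filter at angle 0."""
    return -x * np.exp(-(x**2 + y**2) / 2)

# Basis pair: Gaussian derivatives along x and along y
G0 = gauss_dx(X, Y)
G90 = gauss_dx(Y, X)  # y-derivative, by symmetry

theta = np.deg2rad(30)

# Steering: the filter at angle theta as a linear combination of the basis
steered = np.cos(theta) * G0 + np.sin(theta) * G90

# Direct construction: the Gaussian derivative along the rotated direction
Xr = np.cos(theta) * X + np.sin(theta) * Y
Yr = -np.sin(theta) * X + np.cos(theta) * Y
rotated = gauss_dx(Xr, Yr)

print("max abs difference:", np.abs(steered - rotated).max())
```

The two constructions agree to machine precision: no new filter ever has to be computed for a new orientation, only new interpolation coefficients.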

Bayesian Models for Multi-Electrode Neuronal Spike Recordings

We investigate Bayesian methods to predict responses from multiple retinal and LGN ganglion cells, conditioned on visual stimuli.


Thesis
  • S. Gerwinn (2010). Bayesian Methods for Neural Data Analysis. Eberhard Karls Universität Tübingen, Germany.
Conference Papers
  • S. Gerwinn, J. Macke, M. Seeger, M. Bethge (2008). Bayesian Inference for Spiking Neuron Models with a Sparsity Prior In: Advances in neural information processing systems 20, (Ed) Platt, J. C., D. Koller, Y. Singer, S. Roweis, Advances in Neural Information Processing Systems 20: 21st Annual Conference on Neural Information Processing Systems 2007, Curran, Red Hook, NY, USA, 529-536, ISBN: 978-1-605-60352-0, Twenty-First Annual Conference on Neural Information Processing Systems (NIPS 2007)
  • M. Seeger, S. Gerwinn, M. Bethge (2007). Bayesian Inference for Sparse Generalized Linear Models In: ECML 2007, (Ed) Kok, J. N., J. Koronacki, R. Lopez de Mantaras, S. Matwin, D. Mladenic, A. Skowron, Machine Learning: ECML 2007, Springer, Berlin, Germany, 298-309, 18th European Conference on Machine Learning
  • M. Bethge, S. Gerwinn, JH. Macke (2007). Unsupervised learning of a steerable basis for invariant image representations In: Human Vision and Electronic Imaging XII, (Ed) Rogowitz, B. E., Human Vision and Electronic Imaging XII: Proceedings of the SPIE Human Vision and Electronic Imaging Conference 2007, SPIE, Bellingham, WA, USA, 1-12, SPIE Human Vision and Electronic Imaging Conference 2007
  • M. Bethge, JH. Macke, S. Gerwinn, G. Zeck (2007). Identifying temporal population codes in the retina using canonical correlation analysis 31st Göttingen Neurobiology Conference, 31, 359
  • S. Gerwinn, M. Seeger, G. Zeck, M. Bethge (2007). Bayesian Neural System identification: error bars, receptive fields and neural couplings 31st Göttingen Neurobiology Conference, 31, 360