My research interests are in the area of Bayesian inference and computational neuroscience. In particular, I am interested in characterizing the relationship between sensory signals and neural responses. Bayesian methods are, in my view, the most promising tools for this kind of task.
The applicability of Bayesian methods is often limited by the fact that they are computationally prohibitive. My main focus has therefore been to alleviate this problem by developing approximate methods that remain feasible on a much larger scale and can therefore be applied to realistically sized data.
The main advantage of a Bayesian treatment lies in the explicit representation of the uncertainties involved. Access to this kind of knowledge enables one to perform further analyses such as experimental design or model selection.
I have analyzed the relationship between stimuli and neural responses from three different perspectives: (1) the encoding, (2) the decoding, and (3) the joint-occurrence perspective.
In a first project, I investigated the system identification task corresponding to the encoding direction of the stimulus-response relationship. I developed an approximate Bayesian inference method that is feasible for models of the generalized linear type, one of the most successful and commonly used classes of generative models. As a result, we obtained not only point estimates of the parameters but also model-based confidence intervals, which we in turn used for feature selection and for estimating the functional connectivity within populations of neurons.
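To make the encoding approach concrete, the following sketch fits a Poisson generalized linear model with a (smoothed) Laplace sparsity prior and uses a Laplace approximation at the MAP estimate to obtain parameter error bars. This is a minimal stand-in, not the method from the papers below (which uses a different approximate inference scheme); the simulated data, prior scale, and all variable names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, D = 500, 8                               # time bins, stimulus dimensions
w_true = np.zeros(D)
w_true[:3] = [1.0, -0.8, 0.5]               # sparse ground-truth filter
X = rng.normal(size=(T, D))                 # white-noise stimulus
y = rng.poisson(np.exp(X @ w_true))         # spike counts from a Poisson GLM

tau, eps = 1.0, 1e-6                        # Laplace prior scale, smoothing

def neg_log_post(w):
    eta = np.clip(X @ w, -30, 30)           # clip for numerical stability
    # Poisson log-likelihood (up to constants) + smoothed Laplace prior
    return np.sum(np.exp(eta) - y * eta) + tau * np.sum(np.sqrt(w**2 + eps))

w_map = minimize(neg_log_post, np.zeros(D), method="L-BFGS-B").x

# Laplace approximation: posterior covariance ~ inverse Hessian at the MAP
H = X.T @ (np.exp(X @ w_map)[:, None] * X) \
    + np.diag(tau * eps / (w_map**2 + eps) ** 1.5)
cov = np.linalg.inv(H)
se = np.sqrt(np.diag(cov))                  # per-weight posterior error bars
```

The error bars `se` are what allows feature selection: weights whose credible interval covers zero can be pruned.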
Second, I analyzed the relationship from a decoding point of view. Here, using the leaky integrate-and-fire neuron model, I obtained a simple yet accurate decoding algorithm. Again, thanks to the Bayesian treatment, it is possible not only to decode the most likely stimulus but also to assign to each stimulus the probability that it caused the observed neural response.
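The Bayesian decoding step can be illustrated with a toy posterior computation. Here a simple Poisson spike-count likelihood stands in for the leaky integrate-and-fire likelihood of the actual model, and the candidate stimuli, tuning rule, and observed count are all made up for illustration.

```python
import numpy as np
from scipy.stats import poisson

stimuli = np.array([0.5, 1.0, 2.0, 4.0])    # candidate stimulus intensities
prior = np.full(len(stimuli), 0.25)         # uniform prior over candidates
rates = 5.0 * stimuli                       # assumed tuning: rate grows with intensity
r_obs = 12                                  # observed spike count

likelihood = poisson.pmf(r_obs, rates)      # p(response | stimulus) per candidate
posterior = prior * likelihood              # Bayes' rule, unnormalized
posterior /= posterior.sum()                # p(stimulus | response)
```

Rather than returning only `stimuli[posterior.argmax()]`, the full vector `posterior` assigns each candidate stimulus the probability that it caused the observed response.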
Third, merging both perspectives, I looked at the joint occurrence of stimuli and neural responses. From commonly used descriptive statistics such as the spike-triggered average and the spike-triggered covariance, I built a maximum entropy model. This model can then be used both as a generative model and as a decoding model that exhibits the same descriptive statistics as the observed ones while, by the maximum entropy property, imposing the fewest additional constraints.
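Since the maximum entropy distribution matching a given mean and covariance is Gaussian, the construction can be sketched as follows: fit a Gaussian to the spike-triggered ensemble and then decode via Bayes' rule. The simulated spiking nonlinearity, the filter, and all names here are hypothetical illustration, not data from the actual experiments.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
D, N = 3, 20000
S = rng.normal(size=(N, D))                 # white-noise stimulus ensemble
w = np.array([1.0, 0.0, -1.0])              # hypothetical true filter
spikes = rng.random(N) < 1 / (1 + np.exp(-S @ w))   # sigmoidal spiking rule

sta = S[spikes].mean(axis=0)                # spike-triggered average
stc = np.cov(S[spikes].T)                   # spike-triggered covariance

# The max-ent model matching these two moments is a Gaussian:
p_spk = multivariate_normal(sta, stc)                        # p(s | spike)
p_no = multivariate_normal(S[~spikes].mean(0), np.cov(S[~spikes].T))
prior_spike = spikes.mean()

def p_spike_given_s(s):
    """Bayes-rule decoder built from the two fitted Gaussians."""
    num = p_spk.pdf(s) * prior_spike
    return num / (num + p_no.pdf(s) * (1 - prior_spike))
```

By construction the fitted Gaussian reproduces the observed spike-triggered average and covariance exactly, while adding no further structure.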


Although the pixel representation of an image changes substantially under affine transformations such as translation, rotation, and scaling, the content of the image remains largely unchanged. In particular, if the changes of a D-dimensional light-intensity vector can be described by a one-parameter Lie group, it is possible to find a lossless image representation in which one component corresponds to the transformation parameter and the other (D-1) components are invariant under the Lie group transformation. To derive such image representations, we construct suitable generative models with which steerable filters can be learned in an unsupervised fashion. In particular, we were able to show that, using an antisymmetric variant of canonical correlation analysis (CCA), it is possible to determine a complete basis for 32x32 image patches composed of rotation-invariant steerable filters.
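A one-dimensional analogue makes the lossless split concrete: for circular translation, itself a one-parameter group, the Fourier transform separates the signal into amplitudes that are invariant under the transformation and a phase that encodes the transformation parameter. This toy example uses translation in place of rotation, and the signal is arbitrary.

```python
import numpy as np

N = 16
x = np.random.default_rng(2).normal(size=N)
x_shift = np.roll(x, 3)                     # act with the group: circular shift by 3

F, Fs = np.fft.fft(x), np.fft.fft(x_shift)

# Invariant components: the amplitude spectrum is unchanged by the shift
amplitudes_invariant = np.allclose(np.abs(F), np.abs(Fs))

# Transformation parameter: the phase of the first harmonic encodes the shift
phase_diff = np.angle(Fs[1]) - np.angle(F[1])
recovered_shift = round(-phase_diff * N / (2 * np.pi)) % N
```

The representation is lossless: amplitudes plus phases reconstruct the signal exactly, yet the two kinds of component cleanly separate "what" from "how transformed".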





We investigate Bayesian methods to predict responses from multiple retinal and LGN ganglion cells, conditioned on visual stimuli. 


Gerwinn, S.
Bayesian Methods for Neural Data Analysis
Eberhard Karls Universität Tübingen, Germany, 2010 (phdthesis)
Gerwinn, S., Macke, J., Seeger, M., Bethge, M.
Bayesian Inference for Spiking Neuron Models with a Sparsity Prior
In Advances in neural information processing systems 20, pages: 529-536, (Editors: Platt, J. C., D. Koller, Y. Singer, S. Roweis), Curran, Red Hook, NY, USA, Twenty-First Annual Conference on Neural Information Processing Systems (NIPS), September 2008 (inproceedings)
Seeger, M., Gerwinn, S., Bethge, M.
Bayesian Inference for Sparse Generalized Linear Models
In ECML 2007, pages: 298-309, Lecture Notes in Computer Science ; 4701, (Editors: Kok, J. N., J. Koronacki, R. Lopez de Mantaras, S. Matwin, D. Mladenic, A. Skowron), Springer, Berlin, Germany, 18th European Conference on Machine Learning, September 2007 (inproceedings)
Bethge, M., Gerwinn, S., Macke, J.
Unsupervised learning of a steerable basis for invariant image representations
In Human Vision and Electronic Imaging XII, pages: 1-12, (Editors: Rogowitz, B. E.), SPIE, Bellingham, WA, USA, SPIE Human Vision and Electronic Imaging Conference, February 2007 (inproceedings)
Bethge, M., Macke, J., Gerwinn, S., Zeck, G.
Identifying temporal population codes in the retina using canonical correlation analysis
31st Göttingen Neurobiology Conference, 31, pages: 359, March 2007 (poster)
Gerwinn, S., Seeger, M., Zeck, G., Bethge, M.
Bayesian Neural System identification: error bars, receptive fields and neural couplings
31st Göttingen Neurobiology Conference, 31, pages: 360, March 2007 (poster)