

2013


Detection and attribution of large spatiotemporal extreme events in Earth observation data

Zscheischler, J., Mahecha, M., Harmeling, S., Reichstein, M.

Ecological Informatics, 15, pages: 66-73, 2013 (article)

Abstract
The latest climate projections suggest that both the frequency and intensity of climate extremes will be substantially modified over the course of the coming decades. As a consequence, we need to understand to what extent and via which pathways climate extremes affect the state and functionality of terrestrial ecosystems and the associated biogeochemical cycles on a global scale. So far, the impacts of climate extremes on the terrestrial biosphere have mainly been investigated on the basis of case studies, while global assessments are largely lacking. In order to facilitate global analyses of this kind, we present a methodological framework that firstly detects spatiotemporally contiguous extremes in Earth observations and secondly infers the likely pathway of the preceding climate anomaly. The approach does not require long time series, is computationally fast, and is easily applicable to a variety of data sets with different spatial and temporal resolutions. The key element of our analysis strategy is to search directly in the relevant observations for spatiotemporally connected components exceeding a certain percentile threshold. We also put an emphasis on characterizing the distribution of extreme events and scrutinize the attribution issue. We exemplify the analysis strategy by exploring the fraction of absorbed photosynthetically active radiation (fAPAR) from 1982 to 2011. Our results suggest that the hot spots of extremes in fAPAR lie in Northeastern Brazil, Southeastern Australia, Kenya and Tanzania. Moreover, we demonstrate that the size distribution of extremes follows a distinct power law. The attribution framework reveals that extremes in fAPAR are primarily driven by phases of water scarcity.
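
The detection step described above — thresholding at a percentile and extracting spatiotemporally connected components — can be sketched in a few lines; the synthetic anomaly cube, the 90th-percentile threshold and the 26-neighbourhood connectivity below are illustrative assumptions, not the paper's exact configuration.

    # Minimal sketch: percentile threshold plus connected-component labelling
    # over a (time, lat, lon) anomaly cube; data and settings are illustrative.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    anomalies = rng.normal(size=(120, 40, 60))   # synthetic (time, lat, lon) cube

    threshold = np.percentile(anomalies, 90)     # exceedance threshold
    mask = anomalies > threshold                 # boolean exceedance mask

    # Label components connected in space and time (full 26-connectivity).
    structure = np.ones((3, 3, 3), dtype=bool)
    labels, n_events = ndimage.label(mask, structure=structure)

    # Size of each contiguous extreme event, e.g. for inspecting a power law.
    sizes = np.bincount(labels.ravel())[1:]
    print(n_events, sizes.max())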

Web DOI [BibTex]

Simultaneous PET/MR reveals Brain Function in Activated and Resting State on Metabolic, Hemodynamic and Multiple Temporal Scales

Wehrl, H., Hossain, M., Lankes, K., Liu, C., Bezrukov, I., Martirosian, P., Schick, F., Reischl, G., Pichler, B.

Nature Medicine, 19, pages: 1184–1189, 2013 (article)

Abstract
Combined positron emission tomography (PET) and magnetic resonance imaging (MRI) is a new tool to study functional processes in the brain. Here we study brain function in response to a barrel-field stimulus simultaneously using PET, which traces changes in glucose metabolism on a slow time scale, and functional MRI (fMRI), which assesses fast vascular and oxygenation changes during activation. We found spatial and quantitative discrepancies between the PET and the fMRI activation data. The functional connectivity of the rat brain was assessed by both modalities: the fMRI approach determined a total of nine known neural networks, whereas the PET method identified seven glucose metabolism–related networks. These results demonstrate the feasibility of combined PET-MRI for the simultaneous study of the brain at activation and rest, revealing comprehensive and complementary information to further decode brain function and brain networks.

Web DOI [BibTex]

A Guided Hybrid Genetic Algorithm for Feature Selection with Expensive Cost Functions

Jung, M., Zscheischler, J.

In Proceedings of the International Conference on Computational Science, 18, pages: 2337-2346, Procedia Computer Science, (Editors: Alexandrov, V and Lees, M and Krzhizhanovskaya, V and Dongarra, J and Sloot, PMA), Elsevier, Amsterdam, Netherlands, ICCS, 2013 (inproceedings)

Web DOI [BibTex]

Domain Generalization via Invariant Feature Representation

Muandet, K.

30th International Conference on Machine Learning (ICML2013), 2013 (talk)

PDF [BibTex]

Finding Potential Support Vectors in Separable Classification Problems

Varagnolo, D., Del Favero, S., Dinuzzo, F., Schenato, L., Pillonetto, G.

IEEE Transactions on Neural Networks and Learning Systems, 24(11):1799-1813, 2013 (article)

DOI [BibTex]

Learning responsive robot behavior by imitation

Ben Amor, H., Vogt, D., Ewerton, M., Berger, E., Jung, B., Peters, J.

In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2013), pages: 3257-3264, IEEE, 2013 (inproceedings)

DOI [BibTex]

Learning Skills with Motor Primitives

Peters, J., Kober, J., Mülling, K., Kroemer, O., Neumann, G.

In Proceedings of the 16th Yale Workshop on Adaptive and Learning Systems, 2013 (inproceedings)

[BibTex]

Scalable Influence Estimation in Continuous-Time Diffusion Networks

Du, N., Song, L., Gomez Rodriguez, M., Zha, H.

In Advances in Neural Information Processing Systems 26, pages: 3147-3155, (Editors: C.J.C. Burges and L. Bottou and M. Welling and Z. Ghahramani and K.Q. Weinberger), 27th Annual Conference on Neural Information Processing Systems (NIPS), 2013 (inproceedings)

PDF PDF [BibTex]

Rapid Distance-Based Outlier Detection via Sampling

Sugiyama, M., Borgwardt, KM.

In Advances in Neural Information Processing Systems 26, pages: 467-475, (Editors: C.J.C. Burges and L. Bottou and M. Welling and Z. Ghahramani and K.Q. Weinberger), 27th Annual Conference on Neural Information Processing Systems (NIPS), 2013 (inproceedings)

PDF [BibTex]

Probabilistic Movement Primitives

Paraschos, A., Daniel, C., Peters, J., Neumann, G.

In Advances in Neural Information Processing Systems 26, pages: 2616-2624, (Editors: C.J.C. Burges and L. Bottou and M. Welling and Z. Ghahramani and K.Q. Weinberger), 27th Annual Conference on Neural Information Processing Systems (NIPS), 2013 (inproceedings)

PDF PDF [BibTex]

Causal Inference on Time Series using Restricted Structural Equation Models

Peters, J., Janzing, D., Schölkopf, B.

In Advances in Neural Information Processing Systems 26, pages: 154-162, (Editors: C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger), 27th Annual Conference on Neural Information Processing Systems (NIPS), 2013 (inproceedings)

PDF [BibTex]

Regression-tree Tuning in a Streaming Setting

Kpotufe, S., Orabona, F.

In Advances in Neural Information Processing Systems 26, pages: 1788-1796, (Editors: C.J.C. Burges and L. Bottou and M. Welling and Z. Ghahramani and K.Q. Weinberger), 27th Annual Conference on Neural Information Processing Systems (NIPS), 2013 (inproceedings)

PDF [BibTex]

Density estimation from unweighted k-nearest neighbor graphs: a roadmap

von Luxburg, U., Alamgir, M.

In Advances in Neural Information Processing Systems 26, pages: 225-233, (Editors: C.J.C. Burges and L. Bottou and M. Welling and Z. Ghahramani and K.Q. Weinberger), 27th Annual Conference on Neural Information Processing Systems (NIPS), 2013 (inproceedings)

PDF [BibTex]

Open-Box Spectral Clustering: Applications to Medical Image Analysis

Schultz, T., Kindlmann, G.

IEEE Transactions on Visualization and Computer Graphics, 19(12):2100-2108, 2013 (article)

DOI [BibTex]

im3shape: a maximum likelihood galaxy shear measurement code for cosmic gravitational lensing

Zuntz, J., Kacprzak, T., Voigt, L., Hirsch, M., Rowe, B., Bridle, S.

Monthly Notices of the Royal Astronomical Society, 434(2):1604-1618, Oxford University Press, 2013 (article)

DOI [BibTex]

Linear mixed models for genome-wide association studies

Lippert, C.

University of Tübingen, Germany, 2013 (phdthesis)

[BibTex]

Maximizing Kepler science return per telemetered pixel: Detailed models of the focal plane in the two-wheel era

Hogg, D. W., Angus, R., Barclay, T., Dawson, R., Fergus, R., Foreman-Mackey, D., Harmeling, S., Hirsch, M., Lang, D., Montet, B. T., Schiminovich, D., Schölkopf, B.

arXiv:1309.0653, 2013 (techreport)

link (url) [BibTex]

Accurate detection of differential RNA processing

Drewe, P., Stegle, O., Hartmann, L., Kahles, A., Bohnert, R., Wachter, A., Borgwardt, K. M., Rätsch, G.

Nucleic Acids Research, 41(10):5189-5198, 2013 (article)

DOI [BibTex]

Maximizing Kepler science return per telemetered pixel: Searching the habitable zones of the brightest stars

Montet, B. T., Angus, R., Barclay, T., Dawson, R., Fergus, R., Foreman-Mackey, D., Harmeling, S., Hirsch, M., Hogg, D. W., Lang, D., Schiminovich, D., Schölkopf, B.

arXiv:1309.0654, 2013 (techreport)

link (url) [BibTex]

Detecting regulatory gene–environment interactions with unmeasured environmental factors

Fusi, N., Lippert, C., Borgwardt, K. M., Lawrence, N. D., Stegle, O.

Bioinformatics, 29(11):1382-1389, 2013 (article)

DOI [BibTex]

On the Relations and Differences between Popper Dimension, Exclusion Dimension and VC-Dimension

Seldin, Y., Schölkopf, B.

In Empirical Inference - Festschrift in Honor of Vladimir N. Vapnik, pages: 53-57, 6, (Editors: Schölkopf, B., Luo, Z. and Vovk, V.), Springer, 2013 (inbook)

[BibTex]

Modeling and Learning Complex Motor Tasks: A case study on Robot Table Tennis

Mülling, K.

Technical University Darmstadt, Germany, 2013 (phdthesis)

[BibTex]


Fragmentation of Slow Wave Sleep after Onset of Complete Locked-In State

Soekadar, S. R., Born, J., Birbaumer, N., Bensch, M., Halder, S., Murguialday, A. R., Gharabaghi, A., Nijboer, F., Schölkopf, B., Martens, S.

Journal of Clinical Sleep Medicine, 9(9):951-953, 2013 (article)

DOI [BibTex]

Automatic Malaria Diagnosis system

Mehrjou, A., Abbasian, T., Izadi, M.

In First RSI/ISM International Conference on Robotics and Mechatronics (ICRoM), pages: 205-211, 2013 (inproceedings)

DOI [BibTex]

Structural learning

Braun, D

Scholarpedia, 8(10):12312, October 2013 (article)

Abstract
Structural learning in motor control refers to a metalearning process whereby an agent extracts (abstract) invariants from its sensorimotor stream when experiencing a range of environments that share similar structure. Such invariants can then be exploited for faster generalization and learning-to-learn when experiencing novel, but related task environments.

DOI [BibTex]

The effect of model uncertainty on cooperation in sensorimotor interactions

Grau-Moya, J, Hez, E, Pezzulo, G, Braun, DA

Journal of the Royal Society Interface, 10(87):1-11, October 2013 (article)

Abstract
Decision-makers have been shown to rely on probabilistic models for perception and action. However, these models can be incorrect or partially wrong, in which case the decision-maker has to cope with model uncertainty. Model uncertainty has recently also been shown to be an important determinant of sensorimotor behaviour in humans that can lead to risk-sensitive deviations from Bayes optimal behaviour towards worst-case or best-case outcomes. Here, we investigate the effect of model uncertainty on cooperation in sensorimotor interactions similar to the stag-hunt game, where players develop models about the other player and decide between a pay-off-dominant cooperative solution and a risk-dominant, non-cooperative solution. In simulations, we show that players who allow for optimistic deviations from their opponent model are much more likely to converge to cooperative outcomes. We also implemented this agent model in a virtual reality environment, and let human subjects play against a virtual player. In this game, subjects' pay-offs were experienced as forces opposing their movements. During the experiment, we manipulated the risk sensitivity of the computer player and observed human responses. We found not only that humans adaptively changed their level of cooperation depending on the risk sensitivity of the computer player but also that their initial play exhibited characteristic risk-sensitive biases. Our results suggest that model uncertainty is an important determinant of cooperation in two-player sensorimotor interactions.

DOI [BibTex]

Thermodynamics as a theory of decision-making with information-processing costs

Ortega, PA, Braun, DA

Proceedings of the Royal Society of London A, 469(2153):1-18, May 2013 (article)

Abstract
Perfectly rational decision-makers maximize expected utility, but crucially ignore the resource costs incurred when determining optimal actions. Here, we propose a thermodynamically inspired formalization of bounded rational decision-making where information processing is modelled as state changes in thermodynamic systems that can be quantified by differences in free energy. By optimizing a free energy, bounded rational decision-makers trade off expected utility gains and information-processing costs measured by the relative entropy. As a result, the bounded rational decision-making problem can be rephrased in terms of well-known variational principles from statistical physics. In the limit when computational costs are ignored, the maximum expected utility principle is recovered. We discuss links to existing decision-making frameworks and applications to human decision-making experiments that are at odds with expected utility theory. Since most of the mathematical machinery can be borrowed from statistical physics, the main contribution is to re-interpret the formalism of thermodynamic free-energy differences in terms of bounded rational decision-making and to discuss its relationship to human decision-making experiments.
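
The trade-off described in this abstract can be stated compactly; the display below is a standard way to write it (the notation is ours for illustration, not necessarily the paper's): a bounded rational decision-maker with prior policy $p_0$ and inverse temperature $\beta$ maximizes the free-energy functional

    F[p] = \mathbb{E}_{p}\!\left[U(a)\right] - \frac{1}{\beta}\,\mathrm{KL}\!\left(p(a)\,\|\,p_0(a)\right),
    \qquad
    p^{*}(a) = \frac{p_0(a)\, e^{\beta U(a)}}{\sum_{a'} p_0(a')\, e^{\beta U(a')}},

so that $\beta \to \infty$ (negligible information-processing cost) recovers the maximum expected utility choice, while finite $\beta$ penalizes deviations from the prior by the relative entropy.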

DOI [BibTex]

Abstraction in Decision-Makers with Limited Information Processing Capabilities

Genewein, T, Braun, DA

pages: 1-9, NIPS Workshop Planning with Information Constraints for Control, Reinforcement Learning, Computational Neuroscience, Robotics and Games, December 2013 (conference)

Abstract
A distinctive property of human and animal intelligence is the ability to form abstractions by neglecting irrelevant information, which makes it possible to separate structure from noise. From an information-theoretic point of view, abstractions are desirable because they allow for very efficient information processing. In artificial systems, abstractions are often implemented through computationally costly formations of groups or clusters. In this work we establish the relation between the free-energy framework for decision-making and rate-distortion theory, and demonstrate how applying rate-distortion theory to decision-making leads to the emergence of abstractions. We argue that abstractions are induced by a limit in information-processing capacity.
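
One concrete way to compute the trade-off between expected utility and information-processing cost mentioned above is a Blahut-Arimoto-style alternation, sketched below; the utility matrix and the resource parameter beta are illustrative assumptions, not taken from the paper.

    # Sketch of a Blahut-Arimoto-style iteration for a bounded-rational policy
    # p(a|s): trade expected utility against the information I(S;A).
    import numpy as np

    U = np.array([[1.0, 0.2, 0.0],    # illustrative utility U[s, a]
                  [0.1, 1.0, 0.1],
                  [0.0, 0.3, 1.0]])
    p_s = np.full(U.shape[0], 1.0 / U.shape[0])   # uniform state distribution
    beta = 2.0                                     # information-processing resource

    p_a = np.full(U.shape[1], 1.0 / U.shape[1])    # initial action marginal
    for _ in range(200):
        # policy: p(a|s) proportional to p(a) * exp(beta * U[s, a])
        p_a_given_s = p_a * np.exp(beta * U)
        p_a_given_s /= p_a_given_s.sum(axis=1, keepdims=True)
        # make the action marginal consistent with the policy
        p_a = p_s @ p_a_given_s

    print(np.round(p_a_given_s, 3))   # small beta -> similar rows, i.e. abstraction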

link (url) [BibTex]

Bounded Rational Decision-Making in Changing Environments

Grau-Moya, J, Braun, DA

pages: 1-9, NIPS Workshop Planning with Information Constraints for Control, Reinforcement Learning, Computational Neuroscience, Robotics and Games, December 2013 (conference)

Abstract
A perfectly rational decision-maker chooses the best action with the highest utility gain from a set of possible actions. The optimality principles that describe such decision processes do not take into account the computational costs of finding the optimal action. Bounded rational decision-making addresses this problem by explicitly trading off information-processing costs and expected utility. Interestingly, a similar trade-off between energy and entropy arises when describing changes in thermodynamic systems. This similarity has recently been used to describe bounded rational agents. Crucially, this framework assumes that the environment does not change while the decision-maker is computing the optimal policy. When this requirement is not fulfilled, the decision-maker suffers inefficiencies in utility that arise because the current policy is optimal for a past environment. Here we borrow concepts from non-equilibrium thermodynamics to quantify these inefficiencies and illustrate in simulations their relationship to computational resources.

link (url) [BibTex]


2006


Conformal Multi-Instance Kernels

Blaschko, M., Hofmann, T.

In NIPS 2006 Workshop on Learning to Compare Examples, pages: 1-6, NIPS Workshop on Learning to Compare Examples, December 2006 (inproceedings)

Abstract
In the multiple instance learning setting, each observation is a bag of feature vectors of which one or more vectors indicates membership in a class. The primary task is to identify if any vectors in the bag indicate class membership while ignoring vectors that do not. We describe here a kernel-based technique that defines a parametric family of kernels via conformal transformations and jointly learns a discriminant function over bags together with the optimal parameter settings of the kernel. Learning a conformal transformation effectively amounts to weighting regions in the feature space according to their contribution to classification accuracy; regions that are discriminative will be weighted higher than regions that are not. This allows the classifier to focus on regions contributing to classification accuracy while ignoring regions that correspond to vectors found both in positive and in negative bags. We show how parameters of this transformation can be learned for support vector machines by posing the problem as a multiple kernel learning problem. The resulting multiple instance classifier gives competitive accuracy for several multi-instance benchmark datasets from different domains.

PDF Web [BibTex]

Some observations on the pedestal effect or dipper function

Henning, B., Wichmann, F.

Journal of Vision, 6(13):50, 2006 Fall Vision Meeting of the Optical Society of America, December 2006 (poster)

Abstract
The pedestal effect is the large improvement in the detectability of a sinusoidal “signal” grating observed when the signal is added to a masking or “pedestal” grating of the same spatial frequency, orientation, and phase. We measured the pedestal effect in both broadband and notched noise - noise from which a 1.5-octave band centred on the signal frequency had been removed. Although the pedestal effect persists in broadband noise, it almost disappears in the notched noise. Furthermore, the pedestal effect is substantial when either high- or low-pass masking noise is used. We conclude that the pedestal effect in the absence of notched noise results principally from the use of information derived from channels with peak sensitivities at spatial frequencies different from that of the signal and pedestal. The spatial-frequency components of the notched noise above and below the spatial frequency of the signal and pedestal prevent the use of information about changes in contrast carried in channels tuned to spatial frequencies that are very much different from that of the signal and pedestal. Thus the pedestal or dipper effect measured without notched noise is not a characteristic of individual spatial-frequency tuned channels.

Web DOI [BibTex]

A Kernel Method for the Two-Sample-Problem

Gretton, A., Borgwardt, K., Rasch, M., Schölkopf, B., Smola, A.

20th Annual Conference on Neural Information Processing Systems (NIPS), December 2006 (talk)

Abstract
We propose two statistical tests to determine if two samples are from different distributions. Our test statistic is in both cases the distance between the means of the two samples mapped into a reproducing kernel Hilbert space (RKHS). The first test is based on a large deviation bound for the test statistic, while the second is based on the asymptotic distribution of this statistic. We show that the test statistic can be computed in $O(m^2)$ time. We apply our approach to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where our test performs strongly. We also demonstrate excellent performance when comparing distributions over graphs, for which no alternative tests currently exist.
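
The test statistic described in this abstract — the distance between the two samples' mean embeddings in an RKHS — can be estimated directly from kernel matrices in $O(m^2)$ time; the sketch below shows the biased estimate with a Gaussian kernel, where the bandwidth choice is an assumption for illustration rather than the heuristic used in the paper.

    # Sketch of the (biased) MMD^2 estimate between samples X and Y with an RBF kernel.
    import numpy as np

    def rbf_kernel(A, B, sigma=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def mmd2_biased(X, Y, sigma=1.0):
        Kxx = rbf_kernel(X, X, sigma)
        Kyy = rbf_kernel(Y, Y, sigma)
        Kxy = rbf_kernel(X, Y, sigma)
        return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1.0, size=(200, 5))
    Y = rng.normal(0.5, 1.0, size=(200, 5))   # shifted distribution
    print(mmd2_biased(X, Y))                   # near zero when distributions match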

PDF [BibTex]

Ab-initio gene finding using machine learning

Schweikert, G., Zeller, G., Zien, A., Ong, C., de Bona, F., Sonnenburg, S., Phillips, P., Rätsch, G.

NIPS Workshop on New Problems and Methods in Computational Biology, December 2006 (talk)

Web [BibTex]

Graph boosting for molecular QSAR analysis

Saigo, H., Kadowaki, T., Kudo, T., Tsuda, K.

NIPS Workshop on New Problems and Methods in Computational Biology, December 2006 (talk)

Abstract
We propose a new boosting method that systematically combines graph mining and mathematical programming-based machine learning. Informative and interpretable subgraph features are greedily found by a series of graph mining calls. Due to our mathematical programming formulation, subgraph features and pre-calculated real-valued features are seamlessly integrated. We tested our algorithm on a quantitative structure-activity relationship (QSAR) problem, which is essentially a regression problem over a given set of chemical compounds. In benchmark experiments, the prediction accuracy of our method compared favorably with the best results reported on each dataset.

Web [BibTex]

Inferring Causal Directions by Evaluating the Complexity of Conditional Distributions

Sun, X., Janzing, D., Schölkopf, B.

NIPS Workshop on Causality and Feature Selection, December 2006 (talk)

Abstract
We propose a new approach to infer the causal structure that has generated the observed statistical dependences among n random variables. The idea is that the factorization of the joint measure of cause and effect into P(cause)P(effect|cause) typically leads to simpler conditionals than non-causal factorizations. To evaluate the complexity of the conditionals we have tried two methods. First, we have compared them to those which maximize the conditional entropy subject to the observed first and second moments, since we consider the latter to be the simplest conditionals. Second, we have fitted the data with conditional probability measures that are exponentials of functions in an RKHS and defined the complexity by a Hilbert-space semi-norm. Such a complexity measure has several properties that are useful for our purpose. We describe some encouraging results with both methods applied to real-world data. Moreover, we have combined constraint-based approaches to causal discovery (i.e., methods using only information on conditional statistical dependences) with our method in order to distinguish between causal hypotheses which are equivalent with respect to the imposed independences. Furthermore, we compare the performance to Bayesian approaches to causal inference.

Web [BibTex]


Structure validation of the Josephin domain of ataxin-3: Conclusive evidence for an open conformation

Nicastro, G., Habeck, M., Masino, L., Svergun, DI., Pastore, A.

Journal of Biomolecular NMR, 36(4):267-277, December 2006 (article)

Abstract
The availability of new and fast tools in structure determination has led to a more than exponential growth in the number of structures solved per year. It is therefore increasingly essential to assess the accuracy of new structures by reliable approaches able to assist validation. Here, we discuss a specific example in which the use of different complementary techniques, including Bayesian methods and small-angle scattering, proved essential for validating the two currently available structures of the Josephin domain of ataxin-3, a protein involved in the ubiquitin/proteasome pathway and responsible for neurodegenerative spinocerebellar ataxia of type 3. Taken together, our results demonstrate that only one of the two structures is compatible with the experimental information. Based on the high precision of our refined structure, we show that Josephin contains an open cleft which could be directly implicated in the interaction with polyubiquitin chains and other partners.

Web DOI [BibTex]

A Unifying View of Wiener and Volterra Theory and Polynomial Kernel Regression

Franz, M., Schölkopf, B.

Neural Computation, 18(12):3097-3118, December 2006 (article)

Abstract
Volterra and Wiener series are perhaps the best understood nonlinear system representations in signal processing. Although both approaches have enjoyed a certain popularity in the past, their application has been limited to rather low-dimensional and weakly nonlinear systems due to the exponential growth of the number of terms that have to be estimated. We show that Volterra and Wiener series can be represented implicitly as elements of a reproducing kernel Hilbert space by utilizing polynomial kernels. The estimation complexity of the implicit representation is linear in the input dimensionality and independent of the degree of nonlinearity. Experiments show performance advantages in terms of convergence, interpretability, and system sizes that can be handled.
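
A minimal way to see the implicit representation described above is polynomial-kernel ridge regression, whose cost depends on the number of samples rather than on the number of monomial terms; the toy nonlinear system, polynomial degree and ridge parameter below are assumptions for illustration, not the paper's experiments.

    # Sketch: an implicit Volterra-type model via polynomial kernel ridge regression.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 10))                 # input windows
    y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] ** 2 + 0.01 * rng.normal(size=300)

    def poly_kernel(A, B, degree=3, c=1.0):
        return (A @ B.T + c) ** degree             # implicit monomial feature space

    lam = 1e-3
    K = poly_kernel(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)   # kernel ridge coefficients

    X_test = rng.normal(size=(5, 10))
    print(np.round(poly_kernel(X_test, X) @ alpha, 3))     # predictions for new inputs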

PDF Web DOI [BibTex]

Minimal Logical Constraint Covering Sets

Sinz, F., Schölkopf, B.

(155), Max Planck Institute for Biological Cybernetics, Tübingen, December 2006 (techreport)

Abstract
We propose a general framework for computing minimal set covers under a certain class of logical constraints. The underlying idea is to transform the problem into a mathematical program under linear constraints. In this sense it can be seen as a natural extension of the vector quantization algorithm proposed by Tipping and Schoelkopf. We show which class of logical constraints can be cast and relaxed into linear constraints, and give an algorithm for the transformation.
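
To illustrate the kind of transformation the abstract refers to — a covering problem posed as a mathematical program under linear constraints — the following toy LP relaxation of set cover may be helpful; the incidence matrix and the use of scipy.optimize.linprog are illustrative assumptions, not the report's formulation.

    # Sketch of a set-cover LP relaxation: minimize sum(x) s.t. A x >= 1, 0 <= x <= 1.
    import numpy as np
    from scipy.optimize import linprog

    # Rows are elements to cover, columns are candidate sets (toy incidence matrix).
    A = np.array([[1, 0, 1, 0],
                  [1, 1, 0, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 1]])

    c = np.ones(A.shape[1])                  # minimize the number of selected sets
    res = linprog(c, A_ub=-A, b_ub=-np.ones(A.shape[0]), bounds=(0, 1))
    print(np.round(res.x, 3))                # fractional cover; threshold to pick sets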

PDF [BibTex]

Learning Optimal EEG Features Across Time, Frequency and Space

Farquhar, J., Hill, J., Schölkopf, B.

NIPS Workshop on Current Trends in Brain-Computer Interfacing, December 2006 (talk)

PDF Web [BibTex]

Semi-Supervised Learning

Zien, A.

Advanced Methods in Sequence Analysis Lectures, November 2006 (talk)

Web [BibTex]

Prediction of Protein Function from Networks

Shin, H., Tsuda, K.

In Semi-Supervised Learning, pages: 361-376, Adaptive Computation and Machine Learning, (Editors: Chapelle, O. , B. Schölkopf, A. Zien), MIT Press, Cambridge, MA, USA, November 2006 (inbook)

Abstract
In computational biology, it is common to represent domain knowledge using graphs. Frequently there exist multiple graphs for the same set of nodes, representing information from different sources, and no single graph is sufficient to predict class labels of unlabelled nodes reliably. One way to enhance reliability is to integrate multiple graphs, since individual graphs are partly independent and partly complementary to each other for prediction. In this chapter, we describe an algorithm to assign weights to multiple graphs within graph-based semi-supervised learning. Both predicting class labels and searching for weights for combining multiple graphs are formulated into one convex optimization problem. The graph-combining method is applied to functional class prediction of yeast proteins. When compared with individual graphs, the combined graph with optimized weights performs significantly better than any single graph. When compared with the semidefinite programming-based support vector machine (SDP/SVM), it shows comparable accuracy in a remarkably short time. Compared with a combined graph with equal-valued weights, our method could select important graphs without loss of accuracy, which implies the desirable property of integration with selectivity.

Web [BibTex]

Adapting Spatial Filter Methods for Nonstationary BCIs

Tomioka, R., Hill, J., Blankertz, B., Aihara, K.

In IBIS 2006, pages: 65-70, 2006 Workshop on Information-Based Induction Sciences, November 2006 (inproceedings)

Abstract
A major challenge in applying machine learning methods to Brain-Computer Interfaces (BCIs) is to overcome possible nonstationarity between the data block the method is trained on and the block it is applied to. Assuming the joint distributions of the whitened signal and the class label to be identical in two blocks, where the whitening is done in each block independently, we propose a simple adaptation formula that is applicable to a broad class of spatial filtering methods including ICA, CSP, and logistic regression classifiers. We characterize the class of linear transformations for which the above assumption holds. Experimental results on 60 BCI datasets show improved classification accuracy compared to (a) the fixed spatial filter approach (no adaptation) and (b) the fixed spatial pattern approach (proposed by Hill et al., 2006 [1]).

PDF [BibTex]

Discrete Regularization

Zhou, D., Schölkopf, B.

In Semi-supervised Learning, pages: 237-250, Adaptive computation and machine learning, (Editors: O Chapelle and B Schölkopf and A Zien), MIT Press, Cambridge, MA, USA, November 2006 (inbook)

Abstract
Many real-world machine learning problems are situated on finite discrete sets, including dimensionality reduction, clustering, and transductive inference. A variety of approaches for learning from finite sets has been proposed from different motivations and for different problems. In most of those approaches, a finite set is modeled as a graph, in which the edges encode pairwise relationships among the objects in the set. Consequently many concepts and methods from graph theory are adopted. In particular, the graph Laplacian is widely used. In this chapter we present a systematic framework for learning from a finite set represented as a graph. We develop discrete analogues of a number of differential operators, and then construct a discrete analogue of classical regularization theory based on those discrete differential operators. The graph Laplacian based approaches are special cases of this general discrete regularization framework. An important implication of this framework is that we have a wide choice of regularizers on graphs in addition to the widely used graph Laplacian based one.
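
For the graph Laplacian special case mentioned above, transductive label propagation reduces to a single linear solve; the toy graph, labels and regularization weight in the sketch below are illustrative assumptions rather than an example from the chapter.

    # Sketch of graph-Laplacian-regularized label propagation: solve (I + alpha*L) f = y.
    import numpy as np

    # Toy symmetric adjacency matrix of a 5-node graph.
    W = np.array([[0, 1, 1, 0, 0],
                  [1, 0, 1, 0, 0],
                  [1, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 0]], dtype=float)
    D = np.diag(W.sum(axis=1))
    L = D - W                                   # combinatorial graph Laplacian

    y = np.array([+1.0, 0.0, 0.0, 0.0, -1.0])   # two labeled nodes, rest unlabeled
    alpha = 1.0
    f = np.linalg.solve(np.eye(len(y)) + alpha * L, y)
    print(np.round(f, 3))                       # smooth label assignment over the graph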

PDF Web [BibTex]

New Methods for the P300 Visual Speller

Biessmann, F.

(1), (Editors: Hill, J. ), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2006 (techreport)

PDF [BibTex]

Statistical Analysis of Slow Crack Growth Experiments

Pfingsten, T., Glien, K.

Journal of the European Ceramic Society, 26(15):3061-3065, November 2006 (article)

Abstract
Common approaches for the determination of Slow Crack Growth (SCG) parameters are the static and dynamic loading methods. Since materials with a small Weibull modulus show a large variability in strength, a correct statistical analysis of the data is indispensable. In this work we propose the use of the Maximum Likelihood method and a Bayesian analysis, which, in contrast to the standard procedures, take into account that failure strengths are Weibull distributed. The analysis provides estimates for the SCG parameters, the Weibull modulus, and the corresponding confidence intervals, and removes the need to differentiate manually between inert and fatigue strength data. We compare the methods to a Least Squares approach, which can be considered the standard procedure. The results for dynamic loading data from the glass sealing of MEMS devices show that the assumptions inherent to the standard approach lead to significantly different estimates.
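
The central statistical point — fitting Weibull-distributed strengths by Maximum Likelihood instead of least squares — can be sketched as follows; the simulated strength values and the use of scipy.stats.weibull_min are illustrative assumptions, not the paper's data or implementation.

    # Sketch: Maximum Likelihood fit of a two-parameter Weibull to strength data.
    import numpy as np
    from scipy.stats import weibull_min

    rng = np.random.default_rng(0)
    m_true, s0_true = 8.0, 400.0                     # Weibull modulus and scale (MPa)
    strengths = weibull_min.rvs(m_true, scale=s0_true, size=30, random_state=rng)

    # floc=0 fixes the location parameter, giving the usual two-parameter Weibull.
    m_hat, _, s0_hat = weibull_min.fit(strengths, floc=0)
    print(round(m_hat, 2), round(s0_hat, 1))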

PDF PDF DOI [BibTex]

Optimizing Spatial Filters for BCI: Margin- and Evidence-Maximization Approaches

Farquhar, J., Hill, N., Schölkopf, B.

Challenging Brain-Computer Interfaces: MAIA Workshop 2006, pages: 1, November 2006 (poster)

Abstract
We present easy-to-use alternatives to the often-used two-stage Common Spatial Pattern + classifier approach for spatial filtering and classification of Event-Related Desynchronization signals in BCI. We report two algorithms that aim to optimize the spatial filters according to a criterion more directly related to the ability of the algorithms to generalize to unseen data. Both are based upon the idea of treating the spatial filter coefficients as hyperparameters of a kernel or covariance function. We then optimize these hyperparameters directly alongside the normal classifier parameters with respect to our chosen learning objective function. The two objectives considered are margin maximization as used in Support Vector Machines and the evidence maximization framework used in Gaussian Processes. Our experiments assessed generalization error as a function of the number of training points used, on 9 BCI competition data sets and 5 offline motor imagery data sets measured in Tübingen. Both our approaches show consistent improvements relative to the commonly used CSP+linear classifier combination. Strikingly, the improvement is most significant in the higher-noise cases, when either few trials are used for training or with the most poorly performing subjects. This is a reversal of the usual "rich get richer" effect in the development of CSP extensions, which tend to perform best when the signal is strong enough to accurately find their additional parameters. This makes our approach particularly suitable for clinical application where high levels of noise are to be expected.

PDF PDF [BibTex]
