

2013


Learning responsive robot behavior by imitation

Ben Amor, H., Vogt, D., Ewerton, M., Berger, E., Jung, B., Peters, J.

In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2013), pages: 3257-3264, IEEE, 2013 (inproceedings)

DOI [BibTex]

Learning Skills with Motor Primitives

Peters, J., Kober, J., Mülling, K., Kroemer, O., Neumann, G.

In Proceedings of the 16th Yale Workshop on Adaptive and Learning Systems, 2013 (inproceedings)

[BibTex]

Scalable Influence Estimation in Continuous-Time Diffusion Networks

Du, N., Song, L., Gomez Rodriguez, M., Zha, H.

In Advances in Neural Information Processing Systems 26, pages: 3147-3155, (Editors: C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger), 27th Annual Conference on Neural Information Processing Systems (NIPS), 2013 (inproceedings)

PDF PDF [BibTex]

Rapid Distance-Based Outlier Detection via Sampling

Sugiyama, M., Borgwardt, KM.

In Advances in Neural Information Processing Systems 26, pages: 467-475, (Editors: C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger), 27th Annual Conference on Neural Information Processing Systems (NIPS), 2013 (inproceedings)

PDF [BibTex]

Probabilistic Movement Primitives

Paraschos, A., Daniel, C., Peters, J., Neumann, G.

In Advances in Neural Information Processing Systems 26, pages: 2616-2624, (Editors: C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger), 27th Annual Conference on Neural Information Processing Systems (NIPS), 2013 (inproceedings)

PDF PDF [BibTex]

Causal Inference on Time Series using Restricted Structural Equation Models

Peters, J., Janzing, D., Schölkopf, B.

In Advances in Neural Information Processing Systems 26, pages: 154-162, (Editors: C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger), 27th Annual Conference on Neural Information Processing Systems (NIPS), 2013 (inproceedings)

PDF [BibTex]

Regression-tree Tuning in a Streaming Setting

Kpotufe, S., Orabona, F.

In Advances in Neural Information Processing Systems 26, pages: 1788-1796, (Editors: C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger), 27th Annual Conference on Neural Information Processing Systems (NIPS), 2013 (inproceedings)

PDF [BibTex]

Density estimation from unweighted k-nearest neighbor graphs: a roadmap

von Luxburg, U., Alamgir, M.

In Advances in Neural Information Processing Systems 26, pages: 225-233, (Editors: C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger), 27th Annual Conference on Neural Information Processing Systems (NIPS), 2013 (inproceedings)

PDF [BibTex]

Open-Box Spectral Clustering: Applications to Medical Image Analysis

Schultz, T., Kindlmann, G.

IEEE Transactions on Visualization and Computer Graphics, 19(12):2100-2108, 2013 (article)

DOI [BibTex]

PAC-Bayes-Empirical-Bernstein Inequality

Tolstikhin, I. O., Seldin, Y.

In Advances in Neural Information Processing Systems 26, pages: 109-117, (Editors: C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger), 27th Annual Conference on Neural Information Processing Systems (NIPS), 2013 (inproceedings)

link (url) [BibTex]

im3shape: a maximum likelihood galaxy shear measurement code for cosmic gravitational lensing

Zuntz, J., Kacprzak, T., Voigt, L., Hirsch, M., Rowe, B., Bridle, S.

Monthly Notices of the Royal Astronomical Society, 434(2):1604-1618, Oxford University Press, 2013 (article)

DOI [BibTex]

Linear mixed models for genome-wide association studies

Lippert, C.

University of Tübingen, Germany, 2013 (phdthesis)

[BibTex]

PLAL: Cluster-based active learning

Urner, R., Wulff, S., Ben-David, S.

In Proceedings of the 26th Annual Conference on Learning Theory, 30, pages: 376-397, (Editors: Shalev-Shwartz, S. and Steinwart, I.), JMLR, COLT, 2013 (inproceedings)

link (url) [BibTex]

Maximizing Kepler science return per telemetered pixel: Detailed models of the focal plane in the two-wheel era

Hogg, D. W., Angus, R., Barclay, T., Dawson, R., Fergus, R., Foreman-Mackey, D., Harmeling, S., Hirsch, M., Lang, D., Montet, B. T., Schiminovich, D., Schölkopf, B.

arXiv:1309.0653, 2013 (techreport)

link (url) [BibTex]

Accurate detection of differential RNA processing

Drewe, P., Stegle, O., Hartmann, L., Kahles, A., Bohnert, R., Wachter, A., Borgwardt, K. M., Rätsch, G.

Nucleic Acids Research, 41(10):5189-5198, 2013 (article)

DOI [BibTex]

Monochromatic Bi-Clustering

Wulff, S., Urner, R., Ben-David, S.

In Proceedings of the 30th International Conference on Machine Learning, 28, pages: 145-153, (Editors: Dasgupta, S. and McAllester, D.), JMLR, ICML, 2013 (inproceedings)

link (url) [BibTex]

Maximizing Kepler science return per telemetered pixel: Searching the habitable zones of the brightest stars

Montet, B. T., Angus, R., Barclay, T., Dawson, R., Fergus, R., Foreman-Mackey, D., Harmeling, S., Hirsch, M., Hogg, D. W., Lang, D., Schiminovich, D., Schölkopf, B.

arXiv:1309.0654, 2013 (techreport)

link (url) [BibTex]

Detecting regulatory gene–environment interactions with unmeasured environmental factors

Fusi, N., Lippert, C., Borgwardt, K. M., Lawrence, N. D., Stegle, O.

Bioinformatics, 29(11):1382-1389, 2013 (article)

DOI [BibTex]

Significance of variable height-bandwidth group delay filters in the spectral reconstruction of speech

Devanshu, A., Raj, A., Hegde, R. M.

INTERSPEECH 2013, 14th Annual Conference of the International Speech Communication Association, pages: 1682-1686, 2013 (conference)

link (url) [BibTex]

Generative Multiple-Instance Learning Models For Quantitative Electromyography

Adel, T., Smith, B., Urner, R., Stashuk, D., Lizotte, D. J.

In Proceedings of the 29th Conference on Uncertainty in Artificial Intelligence, AUAI Press, UAI, 2013 (inproceedings)

link (url) [BibTex]

On the Relations and Differences between Popper Dimension, Exclusion Dimension and VC-Dimension

Seldin, Y., Schölkopf, B.

In Empirical Inference - Festschrift in Honor of Vladimir N. Vapnik, pages: 53-57, 6, (Editors: Schölkopf, B., Luo, Z. and Vovk, V.), Springer, 2013 (inbook)

[BibTex]

Modeling and Learning Complex Motor Tasks: A case study on Robot Table Tennis

Mülling, K.

Technical University Darmstadt, Germany, 2013 (phdthesis)

[BibTex]


Fragmentation of Slow Wave Sleep after Onset of Complete Locked-In State

Soekadar, S. R., Born, J., Birbaumer, N., Bensch, M., Halder, S., Murguialday, A. R., Gharabaghi, A., Nijboer, F., Schölkopf, B., Martens, S.

Journal of Clinical Sleep Medicine, 9(9):951-953, 2013 (article)

DOI [BibTex]

Automatic Malaria Diagnosis system

Mehrjou, A., Abbasian, T., Izadi, M.

In First RSI/ISM International Conference on Robotics and Mechatronics (ICRoM), pages: 205-211, 2013 (inproceedings)

DOI [BibTex]

Structural learning

Braun, D.

Scholarpedia, 8(10):12312, October 2013 (article)

Abstract
Structural learning in motor control refers to a metalearning process whereby an agent extracts (abstract) invariants from its sensorimotor stream when experiencing a range of environments that share similar structure. Such invariants can then be exploited for faster generalization and learning-to-learn when experiencing novel, but related task environments.

DOI [BibTex]

The effect of model uncertainty on cooperation in sensorimotor interactions

Grau-Moya, J., Hez, E., Pezzulo, G., Braun, D. A.

Journal of the Royal Society Interface, 10(87):1-11, October 2013 (article)

Abstract
Decision-makers have been shown to rely on probabilistic models for perception and action. However, these models can be incorrect or partially wrong in which case the decision-maker has to cope with model uncertainty. Model uncertainty has recently also been shown to be an important determinant of sensorimotor behaviour in humans that can lead to risk-sensitive deviations from Bayes optimal behaviour towards worst-case or best-case outcomes. Here, we investigate the effect of model uncertainty on cooperation in sensorimotor interactions similar to the stag-hunt game, where players develop models about the other player and decide between a pay-off-dominant cooperative solution and a risk-dominant, non-cooperative solution. In simulations, we show that players who allow for optimistic deviations from their opponent model are much more likely to converge to cooperative outcomes. We also implemented this agent model in a virtual reality environment, and let human subjects play against a virtual player. In this game, subjects' pay-offs were experienced as forces opposing their movements. During the experiment, we manipulated the risk sensitivity of the computer player and observed human responses. We found not only that humans adaptively changed their level of cooperation depending on the risk sensitivity of the computer player but also that their initial play exhibited characteristic risk-sensitive biases. Our results suggest that model uncertainty is an important determinant of cooperation in two-player sensorimotor interactions.

DOI [BibTex]

Thermodynamics as a theory of decision-making with information-processing costs

Ortega, P. A., Braun, D. A.

Proceedings of the Royal Society of London A, 469(2153):1-18, May 2013 (article)

Abstract
Perfectly rational decision-makers maximize expected utility, but crucially ignore the resource costs incurred when determining optimal actions. Here, we propose a thermodynamically inspired formalization of bounded rational decision-making where information processing is modelled as state changes in thermodynamic systems that can be quantified by differences in free energy. By optimizing a free energy, bounded rational decision-makers trade off expected utility gains and information-processing costs measured by the relative entropy. As a result, the bounded rational decision-making problem can be rephrased in terms of well-known variational principles from statistical physics. In the limit when computational costs are ignored, the maximum expected utility principle is recovered. We discuss links to existing decision-making frameworks and applications to human decision-making experiments that are at odds with expected utility theory. Since most of the mathematical machinery can be borrowed from statistical physics, the main contribution is to re-interpret the formalism of thermodynamic free-energy differences in terms of bounded rational decision-making and to discuss its relationship to human decision-making experiments.
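A compact way to write the trade-off described in this abstract is the free-energy functional (the notation here is illustrative rather than quoted from the paper):

F[\pi] \;=\; \mathbb{E}_{\pi}\!\left[U(a)\right] \;-\; \frac{1}{\beta}\, D_{\mathrm{KL}}\!\left(\pi \,\|\, \pi_0\right),

where U is the utility, \pi_0 a prior policy, and 1/\beta the price of information processing. Maximizing F over \pi yields a Gibbs-like policy \pi(a) \propto \pi_0(a)\exp(\beta U(a)); in the limit \beta \to \infty the relative-entropy cost becomes negligible and the maximum expected utility principle is recovered, as stated in the abstract.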

DOI [BibTex]

Abstraction in Decision-Makers with Limited Information Processing Capabilities

Genewein, T., Braun, D. A.

pages: 1-9, NIPS Workshop Planning with Information Constraints for Control, Reinforcement Learning, Computational Neuroscience, Robotics and Games, December 2013 (conference)

Abstract
A distinctive property of human and animal intelligence is the ability to form abstractions by neglecting irrelevant information, which allows structure to be separated from noise. From an information-theoretic point of view, abstractions are desirable because they allow for very efficient information processing. In artificial systems, abstractions are often implemented through computationally costly formation of groups or clusters. In this work we establish the relation between the free-energy framework for decision-making and rate-distortion theory and demonstrate how the application of rate-distortion to decision-making leads to the emergence of abstractions. We argue that abstractions are induced by a limit on information-processing capacity.
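The rate-distortion reading of this abstract can be sketched as the following objective (illustrative notation, not taken verbatim from the paper):

\max_{p(a \mid s)} \; \mathbb{E}_{p(s)\,p(a \mid s)}\!\left[U(s,a)\right] \;-\; \frac{1}{\beta}\, I(S;A),

where the mutual information I(S;A) between world states and actions measures processing capacity. The self-consistent optimum has the form p(a \mid s) \propto p(a)\exp(\beta U(s,a)) with marginal p(a) = \sum_s p(s)\, p(a \mid s); for small \beta many states are mapped to nearly the same action distribution, which is one way to make precise how abstractions emerge from limited capacity.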

link (url) [BibTex]

Bounded Rational Decision-Making in Changing Environments

Grau-Moya, J., Braun, D. A.

pages: 1-9, NIPS Workshop Planning with Information Constraints for Control, Reinforcement Learning, Computational Neuroscience, Robotics and Games, December 2013 (conference)

Abstract
A perfectly rational decision-maker chooses the best action with the highest utility gain from a set of possible actions. The optimality principles that describe such decision processes do not take into account the computational costs of finding the optimal action. Bounded rational decision-making addresses this problem by explicitly trading off information-processing costs and expected utility. Interestingly, a similar trade-off between energy and entropy arises when describing changes in thermodynamic systems. This similarity has recently been used to describe bounded rational agents. Crucially, this framework assumes that the environment does not change while the decision-maker is computing the optimal policy. When this requirement is not fulfilled, the decision-maker will suffer inefficiencies in utility that arise because the current policy is optimal for a past environment. Here we borrow concepts from non-equilibrium thermodynamics to quantify these inefficiencies and use simulations to illustrate their relationship with computational resources.

link (url) [BibTex]


2006


Global Biclustering of Microarray Data

Wolf, T., Brors, B., Hofmann, T., Georgii, E.

In ICDMW 2006, pages: 125-129, (Editors: Tsumoto, S., C. W. Clifton, N. Zhong, X. Wu, J. Liu, B. W. Wah, Y.-M. Cheung), IEEE Computer Society, Los Alamitos, CA, USA, Sixth IEEE International Conference on Data Mining, December 2006 (inproceedings)

Abstract
We consider the problem of simultaneously clustering genes and conditions of a gene expression data matrix. A bicluster is defined as a subset of genes that show similar behavior within a subset of conditions. Finding biclusters can be useful for revealing groups of genes involved in the same molecular process as well as groups of conditions where this process takes place. Previous work either deals with local, bicluster-based criteria or assumes a very specific structure of the data matrix (e.g. checkerboard or block-diagonal) [11]. In contrast, our goal is to find a set of flexibly arranged biclusters that is optimal with regard to a global objective function. As this is an NP-hard combinatorial problem, we describe several techniques to obtain approximate solutions. We benchmarked our approach successfully on the Alizadeh B-cell lymphoma data set [1].

Web DOI [BibTex]

Conformal Multi-Instance Kernels

Blaschko, M., Hofmann, T.

In NIPS 2006 Workshop on Learning to Compare Examples, pages: 1-6, NIPS Workshop on Learning to Compare Examples, December 2006 (inproceedings)

Abstract
In the multiple instance learning setting, each observation is a bag of feature vectors of which one or more vectors indicates membership in a class. The primary task is to identify if any vectors in the bag indicate class membership while ignoring vectors that do not. We describe here a kernel-based technique that defines a parametric family of kernels via conformal transformations and jointly learns a discriminant function over bags together with the optimal parameter settings of the kernel. Learning a conformal transformation effectively amounts to weighting regions in the feature space according to their contribution to classification accuracy; regions that are discriminative will be weighted higher than regions that are not. This allows the classifier to focus on regions contributing to classification accuracy while ignoring regions that correspond to vectors found both in positive and in negative bags. We show how parameters of this transformation can be learned for support vector machines by posing the problem as a multiple kernel learning problem. The resulting multiple instance classifier gives competitive accuracy for several multi-instance benchmark datasets from different domains.
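For orientation, a conformal transformation of a base kernel k is the standard construction (stated here for illustration, not copied from the paper)

\tilde{k}(x, x') \;=\; c(x)\, k(x, x')\, c(x'), \qquad c(x) > 0,

which preserves positive definiteness and reweights regions of the input space according to c. Learning the parameters of c jointly with the bag-level discriminant is what the abstract casts as a multiple kernel learning problem.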

PDF Web [BibTex]

Some observations on the pedestal effect or dipper function

Henning, B., Wichmann, F.

Journal of Vision, 6(13):50, 2006 Fall Vision Meeting of the Optical Society of America, December 2006 (poster)

Abstract
The pedestal effect is the large improvement in the detectability of a sinusoidal “signal” grating observed when the signal is added to a masking or “pedestal” grating of the same spatial frequency, orientation, and phase. We measured the pedestal effect in both broadband and notched noise - noise from which a 1.5-octave band centred on the signal frequency had been removed. Although the pedestal effect persists in broadband noise, it almost disappears in the notched noise. Furthermore, the pedestal effect is substantial when either high- or low-pass masking noise is used. We conclude that the pedestal effect in the absence of notched noise results principally from the use of information derived from channels with peak sensitivities at spatial frequencies different from that of the signal and pedestal. The spatial-frequency components of the notched noise above and below the spatial frequency of the signal and pedestal prevent the use of information about changes in contrast carried in channels tuned to spatial frequencies that are very much different from that of the signal and pedestal. Thus the pedestal or dipper effect measured without notched noise is not a characteristic of individual spatial-frequency tuned channels.

Web DOI [BibTex]

A Kernel Method for the Two-Sample-Problem

Gretton, A., Borgwardt, K., Rasch, M., Schölkopf, B., Smola, A.

20th Annual Conference on Neural Information Processing Systems (NIPS), December 2006 (talk)

Abstract
We propose two statistical tests to determine if two samples are from different distributions. Our test statistic is in both cases the distance between the means of the two samples mapped into a reproducing kernel Hilbert space (RKHS). The first test is based on a large deviation bound for the test statistic, while the second is based on the asymptotic distribution of this statistic. We show that the test statistic can be computed in $O(m^2)$ time. We apply our approach to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where our test performs strongly. We also demonstrate excellent performance when comparing distributions over graphs, for which no alternative tests currently exist.
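The O(m^2) statistic mentioned in the abstract is the squared distance between the empirical kernel mean embeddings of the two samples. A minimal sketch of the biased estimator, assuming a Gaussian kernel; the function names and the bandwidth value are illustrative choices, not part of the paper:

import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # pairwise squared Euclidean distances, then RBF kernel matrix
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2_biased(X, Y, sigma=1.0):
    # squared MMD estimate: mean(Kxx) + mean(Kyy) - 2 * mean(Kxy)
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

A value that is large relative to what is obtained under permutations of the pooled sample indicates that the two distributions differ; the paper calibrates the statistic with a large-deviation bound and with its asymptotic null distribution.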

PDF [BibTex]

Ab-initio gene finding using machine learning

Schweikert, G., Zeller, G., Zien, A., Ong, C., de Bona, F., Sonnenburg, S., Phillips, P., Rätsch, G.

NIPS Workshop on New Problems and Methods in Computational Biology, December 2006 (talk)

Web [BibTex]

Reinforcement Learning by Reward-Weighted Regression

Peters, J.

NIPS Workshop: Towards a New Reinforcement Learning?, December 2006 (talk)

Web [BibTex]

Graph boosting for molecular QSAR analysis

Saigo, H., Kadowaki, T., Kudo, T., Tsuda, K.

NIPS Workshop on New Problems and Methods in Computational Biology, December 2006 (talk)

Abstract
We propose a new boosting method that systematically combines graph mining and mathematical programming-based machine learning. Informative and interpretable subgraph features are greedily found by a series of graph mining calls. Due to our mathematical programming formulation, subgraph features and pre-calculated real-valued features are seamlessly integrated. We tested our algorithm on a quantitative structure-activity relationship (QSAR) problem, which is essentially a regression problem over a given set of chemical compounds. In benchmark experiments, the prediction accuracy of our method compared favorably with the best results reported on each dataset.

Web [BibTex]

A New Projected Quasi-Newton Approach for the Nonnegative Least Squares Problem

Kim, D., Sra, S., Dhillon, I.

(TR-06-54), Univ. of Texas, Austin, December 2006 (techreport)

PDF [BibTex]

Inferring Causal Directions by Evaluating the Complexity of Conditional Distributions

Sun, X., Janzing, D., Schölkopf, B.

NIPS Workshop on Causality and Feature Selection, December 2006 (talk)

Abstract
We propose a new approach to infer the causal structure that has generated the observed statistical dependences among n random variables. The idea is that the factorization of the joint measure of cause and effect into P(cause)P(effect|cause) typically leads to simpler conditionals than non-causal factorizations. To evaluate the complexity of the conditionals we have tried two methods. First, we have compared them to those which maximize the conditional entropy subject to the observed first and second moments, since we consider the latter as the simplest conditionals. Second, we have fitted the data with conditional probability measures that are exponentials of functions in an RKHS and defined the complexity by a Hilbert-space semi-norm. Such a complexity measure has several properties that are useful for our purpose. We describe some encouraging results with both methods applied to real-world data. Moreover, we have combined constraint-based approaches to causal discovery (i.e., methods using only information on conditional statistical dependences) with our method in order to distinguish between causal hypotheses which are equivalent with respect to the imposed independences. Furthermore, we compare the performance to Bayesian approaches to causal inference.

Web [BibTex]


Information-theoretic Metric Learning

Davis, J., Kulis, B., Sra, S., Dhillon, I.

In NIPS 2006 Workshop on Learning to Compare Examples, pages: 1-5, NIPS Workshop on Learning to Compare Examples, December 2006 (inproceedings)

Abstract
We formulate the metric learning problem as that of minimizing the differential relative entropy between two multivariate Gaussians under constraints on the Mahalanobis distance function. Via a surprising equivalence, we show that this problem can be solved as a low-rank kernel learning problem. Specifically, we minimize the Burg divergence of a low-rank kernel to an input kernel, subject to pairwise distance constraints. Our approach has several advantages over existing methods. First, we present a natural information-theoretic formulation for the problem. Second, the algorithm utilizes the methods developed by Kulis et al. [6], which do not involve any eigenvector computation; in particular, the running time of our method is faster than most existing techniques. Third, the formulation offers insights into connections between metric learning and kernel learning.
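The Burg (LogDet) divergence objective described in the abstract is usually stated as follows; the notation is a reconstruction for illustration rather than a quote from the paper:

\min_{A \succeq 0} \; D_{\mathrm{ld}}(A, A_0) \;=\; \mathrm{tr}\!\left(A A_0^{-1}\right) - \log\det\!\left(A A_0^{-1}\right) - d

subject to (x_i - x_j)^\top A (x_i - x_j) \le u for pairs constrained to be similar and (x_i - x_j)^\top A (x_i - x_j) \ge \ell for pairs constrained to be dissimilar, where A parametrizes the Mahalanobis distance and A_0 is the prior matrix (e.g., the identity). The equivalence mentioned in the abstract is that this matrix problem can be solved as a low-rank kernel learning problem with the same divergence.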

PDF Web [BibTex]

Pattern Mining in Frequent Dynamic Subgraphs

Borgwardt, KM., Kriegel, H-P., Wackersreuther, P.

In pages: 818-822, (Editors: Clifton, C.W.), IEEE Computer Society, Los Alamitos, CA, USA, Sixth International Conference on Data Mining (ICDM), December 2006 (inproceedings)

Abstract
Graph-structured data is becoming increasingly abundant in many application domains. Graph mining aims at finding interesting patterns within this data that represent novel knowledge. While current data mining deals with static graphs that do not change over time, coming years will see the advent of an increasing number of time series of graphs. In this article, we investigate how pattern mining on static graphs can be extended to time series of graphs. In particular, we are considering dynamic graphs with edge insertions and edge deletions over time. We define frequency in this setting and provide algorithmic solutions for finding frequent dynamic subgraph patterns. Existing subgraph mining algorithms can be easily integrated into our framework to make them handle dynamic graphs. Experimental results on real-world data confirm the practical feasibility of our approach.

Web DOI [BibTex]

Structure validation of the Josephin domain of ataxin-3: Conclusive evidence for an open conformation

Nicastro, G., Habeck, M., Masino, L., Svergun, DI., Pastore, A.

Journal of Biomolecular NMR, 36(4):267-277, December 2006 (article)

Abstract
The availability of new and fast tools in structure determination has led to a more than exponential growth of the number of structures solved per year. It is therefore increasingly essential to assess the accuracy of the new structures by reliable approaches able to assist validation. Here, we discuss a specific example in which the use of different complementary techniques, which include Bayesian methods and small angle scattering, proved essential for validating the two currently available structures of the Josephin domain of ataxin-3, a protein involved in the ubiquitin/proteasome pathway and responsible for neurodegenerative spinocerebellar ataxia of type 3. Taken together, our results demonstrate that only one of the two structures is compatible with the experimental information. Based on the high precision of our refined structure, we show that Josephin contains an open cleft which could be directly implicated in the interaction with polyubiquitin chains and other partners.

Web DOI [BibTex]

Probabilistic inference for solving (PO)MDPs

Toussaint, M., Harmeling, S., Storkey, A.

(934), School of Informatics, University of Edinburgh, December 2006 (techreport)

PDF [BibTex]

A Unifying View of Wiener and Volterra Theory and Polynomial Kernel Regression

Franz, M., Schölkopf, B.

Neural Computation, 18(12):3097-3118, December 2006 (article)

Abstract
Volterra and Wiener series are perhaps the best understood nonlinear system representations in signal processing. Although both approaches have enjoyed a certain popularity in the past, their application has been limited to rather low-dimensional and weakly nonlinear systems due to the exponential growth of the number of terms that have to be estimated. We show that Volterra and Wiener series can be represented implicitly as elements of a reproducing kernel Hilbert space by utilizing polynomial kernels. The estimation complexity of the implicit representation is linear in the input dimensionality and independent of the degree of nonlinearity. Experiments show performance advantages in terms of convergence, interpretability, and system sizes that can be handled.
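Concretely, the implicit representation referred to in the abstract amounts to regression with an inhomogeneous polynomial kernel, since (1 + x^\top z)^p spans all monomials of the input up to degree p, i.e. the terms of a degree-p Volterra operator. A minimal kernel ridge regression sketch; the degree, regularizer, and helper names are illustrative choices, not taken from the paper:

import numpy as np

def poly_kernel(X, Z, degree=3):
    # inhomogeneous polynomial kernel; implicitly covers all monomials up to 'degree'
    return (1.0 + X @ Z.T) ** degree

def fit_kernel_ridge(X, y, degree=3, lam=1e-3):
    # dual (kernel) ridge regression: cost is governed by the number of samples,
    # not by the exponentially many explicit Volterra terms
    K = poly_kernel(X, X, degree)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(alpha, X_train, X_new, degree=3):
    return poly_kernel(X_new, X_train, degree) @ alpha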

PDF Web DOI [BibTex]

Minimal Logical Constraint Covering Sets

Sinz, F., Schölkopf, B.

(155), Max Planck Institute for Biological Cybernetics, Tübingen, December 2006 (techreport)

Abstract
We propose a general framework for computing minimal set covers under a certain class of logical constraints. The underlying idea is to transform the problem into a mathematical program under linear constraints. In this sense it can be seen as a natural extension of the vector quantization algorithm proposed by Tipping and Schoelkopf. We show which class of logical constraints can be cast and relaxed into linear constraints and give an algorithm for the transformation.
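For orientation, a plain minimum set cover over elements e and candidate sets S_i can be written as the integer program (a generic textbook formulation, not the report's exact one):

\min_{x \in \{0,1\}^n} \; \sum_{i=1}^{n} x_i \quad \text{s.t.} \quad \sum_{i \,:\, e \in S_i} x_i \;\ge\; 1 \quad \text{for every element } e,

whose binary constraints are relaxed to 0 \le x_i \le 1 to obtain a linear program. The report's question is which additional logical constraints on the cover can likewise be cast and relaxed as linear constraints.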

PDF [BibTex]

Learning Optimal EEG Features Across Time, Frequency and Space

Farquhar, J., Hill, J., Schölkopf, B.

NIPS Workshop on Current Trends in Brain-Computer Interfacing, December 2006 (talk)

PDF Web [BibTex]

Acquiring web page information without commitment to downloading the web page

Heilbron, L., Platt, J. C., Schölkopf, B., Simard, P. Y.

United States Patent, No 7155489, December 2006 (patent)

[BibTex]

Semi-Supervised Learning

Zien, A.

Advanced Methods in Sequence Analysis Lectures, November 2006 (talk)

Web [BibTex]
