

2004


Fast Binary and Multi-Output Reduced Set Selection

Weston, J., Bakir, G.

(132), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2004 (techreport)

Abstract
We propose fast algorithms for reducing the number of kernel evaluations in the testing phase for methods such as Support Vector Machines (SVM) and Ridge Regression (RR). For non-sparse methods such as RR this results in significantly improved prediction time. For binary SVMs, which are already sparse in their expansion, the payoff is mainly in the cases of noisy or large-scale problems. However, we then further develop our method for multi-class problems where, after choosing the expansion to find vectors which describe all the hyperplanes jointly, we again achieve significant gains.
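As an illustration of the reduced-set idea, the following Python sketch projects a given kernel expansion onto a smaller set of expansion points in the RKHS. This is a hedged sketch only: the report's contribution is the fast selection of those points, which is not shown here, and all data are synthetic.

import numpy as np

def rbf(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))          # expansion points of a trained kernel machine
alpha = rng.normal(size=200)           # its expansion coefficients

# pick a small subset Z of the expansion points (here: random; the report's
# contribution is a fast, smarter choice of Z)
Z = X[rng.choice(200, size=20, replace=False)]

# coefficients beta minimising the RKHS distance between the full expansion
# f = sum_i alpha_i k(x_i, .) and the reduced one g = sum_j beta_j k(z_j, .)
Kzz = rbf(Z, Z)
Kzx = rbf(Z, X)
beta = np.linalg.solve(Kzz + 1e-8 * np.eye(len(Z)), Kzx @ alpha)

# prediction now needs 20 kernel evaluations per test point instead of 200
x_test = rng.normal(size=(5, 3))
f_full = rbf(x_test, X) @ alpha
f_red = rbf(x_test, Z) @ beta
print(np.abs(f_full - f_red).max())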

PostScript [BibTex]

Joint Kernel Maps

Weston, J., Schölkopf, B., Bousquet, O., Mann, .., Noble, W.

(131), Max-Planck-Institute for Biological Cybernetics, Tübingen, November 2004 (techreport)

PDF [BibTex]

S-cones contribute to flicker brightness in human vision

Wehrhahn, C., Hill, NJ., Dillenburger, B.

34(174.12), 34th Annual Meeting of the Society for Neuroscience (Neuroscience), October 2004 (poster)

Abstract
In the retina of primates three cone types sensitive to short, middle and long wavelengths of light convert photons into electrical signals. Many investigators have presented evidence that, in color normal observers, the signals of cones sensitive to short wavelengths of light (S-cones) do not contribute to the perception of brightness of a colored surface when this is alternated with an achromatic reference (flicker brightness). Other studies indicate that humans do use S-cone signals when performing this task. Common to all these studies is the small number of observers, whose performance data are reported. Considerable variability in the occurrence of cone types across observers has been found, but, to our knowledge, no cone counts exist from larger populations of humans. We reinvestigated how much the S-cones contribute to flicker brightness. 76 color normal observers were tested in a simple psychophysical procedure neutral to the cone type occurrence (Teufel & Wehrhahn (2000), JOSA A 17: 994 - 1006). The data show that, in the majority of our observers, S-cones provide input with a negative sign - relative to L- and M-cone contribution - in the task in question. There is indeed considerable between-subject variability such that for 20 out of 76 observers the magnitude of this input does not differ significantly from 0. Finally, we argue that the sign of S-cone contribution to flicker brightness perception by an observer cannot be used to infer the relative sign of their contributions to the neuronal signals carrying the information leading to the perception of flicker brightness. We conclude that studies which use only a small number of observers may easily fail to find significant evidence for the small but significant population tendency for the S-cones to contribute to flicker brightness. Our results confirm all earlier results and reconcile their contradictory interpretations.

Web [BibTex]

Learning Motor Primitives with Reinforcement Learning

Peters, J., Schaal, S.

AAAI Fall Symposium on Real-Life Reinforcement Learning 2004, 2004, pages: 1, October 2004 (poster)

Web [BibTex]

Semi-Supervised Induction

Yu, K., Tresp, V., Zhou, D.

(141), Max Planck Institute for Biological Cybernetics, Tuebingen, Germany, August 2004 (techreport)

Abstract
Considerable progress was recently achieved on semi-supervised learning, which differs from traditional supervised learning by additionally exploiting the information in the unlabelled examples. However, a disadvantage of many existing methods is that they do not generalize to unseen inputs. This paper investigates learning methods that effectively make use of both labelled and unlabelled data to build predictive functions, which are defined not just on the seen inputs but on the whole space. As a nice property, the proposed method allows efficient training and can easily handle new test points. We validate the method on both toy data and real-world data sets.

PDF PDF [BibTex]

On Hausdorff Distance Measures

Shapiro, MD., Blaschko, MB.

Department of Computer Science, University of Massachusetts Amherst, August 2004 (techreport)

[BibTex]

Object categorization with SVM: kernels for local features

Eichhorn, J., Chapelle, O.

(137), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, July 2004 (techreport)

Abstract
In this paper, we propose to combine an efficient image representation based on local descriptors with a Support Vector Machine classifier in order to perform object categorization. For this purpose, we apply kernels defined on sets of vectors. After testing different combinations of kernel / local descriptors, we have been able to identify a very performant one.

PDF [BibTex]

Hilbertian Metrics and Positive Definite Kernels on Probability Measures

Hein, M., Bousquet, O.

(126), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, July 2004 (techreport)

Abstract
We investigate the problem of defining Hilbertian metrics and positive definite kernels on probability measures, continuing previous work. This type of kernel has shown very good results in text classification and has a wide range of possible applications. In this paper we extend the two-parameter family of Hilbertian metrics of Topsøe such that it now includes all commonly used Hilbertian metrics on probability measures. This allows us to do model selection among these metrics in an elegant and unified way. Second, we further investigate our approach to incorporating similarity information of the probability space into the kernel. The analysis provides a better understanding of these kernels and gives in some cases a more efficient way to compute them. Finally we compare all proposed kernels on two text classification and one image classification problem.
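For concreteness, a minimal sketch of one member of this class of kernels, assuming the Jensen-Shannon divergence between histograms as the underlying squared Hilbertian metric; the full two-parameter family and the structural kernels of the report are not reproduced, and the histograms below are synthetic.

import numpy as np

def js_divergence(p, q, eps=1e-12):
    # Jensen-Shannon divergence between two normalised histograms
    p = p / p.sum(); q = q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * (np.log(a + eps) - np.log(b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def js_kernel(P, lam=1.0):
    # Gram matrix of exp(-lam * JS(p_i, p_j)) over a list of histograms
    n = len(P)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            K[i, j] = K[j, i] = np.exp(-lam * js_divergence(P[i], P[j]))
    return K

# toy bag-of-words histograms
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(50), size=10)
K = js_kernel(P)
print(np.linalg.eigvalsh(K).min())   # should be (numerically) non-negative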

PDF [BibTex]

Kernels, Associated Structures and Generalizations

Hein, M., Bousquet, O.

(127), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, July 2004 (techreport)

Abstract
This paper gives a survey of results in the mathematical literature on positive definite kernels and their associated structures. We concentrate on properties which seem potentially relevant for Machine Learning and try to clarify some results that have been misused in the literature. Moreover we consider different lines of generalizations of positive definite kernels. Namely we deal with operator-valued kernels and present the general framework of Hilbertian subspaces of Schwartz which we use to introduce kernels which are distributions. Finally indefinite kernels and their associated reproducing kernel spaces are considered.

PDF [BibTex]

Triangle Fixing Algorithms for the Metric Nearness Problem

Dhillon, I., Sra, S., Tropp, J.

Univ. of Texas at Austin, June 2004 (techreport)

PDF [BibTex]

Learning Motor Primitives with Reinforcement Learning

Peters, J., Schaal, S.

11th Joint Symposium on Neural Computation (JSNC 2004), 11, pages: 1, May 2004 (poster)

Abstract
One of the major challenges in action generation for robotics and in the understanding of human motor control is to learn the "building blocks of movement generation," or more precisely, motor primitives. Recently, Ijspeert et al. [1, 2] suggested a novel framework for using nonlinear dynamical systems as motor primitives. While a lot of progress has been made in teaching these motor primitives using supervised or imitation learning, the self-improvement by interaction of the system with the environment remains a challenging problem. In this poster, we evaluate how different reinforcement learning approaches can be used to improve the performance of motor primitives. In pursuing this goal, we highlight the difficulties with current reinforcement learning methods, and outline how these lead to a novel algorithm which is based on natural policy gradients [3]. We compare this algorithm to previous reinforcement learning algorithms in the context of dynamic motor primitive learning, and show that it outperforms these by at least an order of magnitude. We demonstrate the efficiency of the resulting reinforcement learning method for creating complex behaviors for autonomous robotics. The studied behaviors will include both discrete, finite tasks such as baseball swings, as well as complex rhythmic patterns as they occur in biped locomotion.

Web [BibTex]

Kamerakalibrierung und Tiefenschätzung: Ein Vergleich von klassischer Bündelblockausgleichung und statistischen Lernalgorithmen

Sinz, FH.

Wilhelm-Schickard-Institut für Informatik, Universität Tübingen, Tübingen, Germany, March 2004 (techreport)

Abstract
The thesis compares two approaches to the problem of estimating the spatial position of a point from its image coordinates in two different cameras. The classical method of bundle adjustment models the two individual cameras and estimates their exterior and interior orientation with an iterative calibration procedure whose convergence depends strongly on good initial values. The depth of a point is then estimated by inverting three of the four projection equations of the single-camera models. The second method uses kernel ridge regression and support vector regression to learn a direct mapping from image coordinates to world coordinates. The results show that the machine learning approach, besides considerably simplifying the calibration process, can lead to higher positional accuracy.

PDF [BibTex]

Human Classification Behaviour Revisited by Machine Learning

Graf, A., Wichmann, F., Bülthoff, H., Schölkopf, B.

7, pages: 134, (Editors: Bülthoff, H.H., H.A. Mallot, R. Ulrich and F.A. Wichmann), 7th Tübingen Perception Conference (TWK), February 2004 (poster)

Abstract
We attempt to understand visual classification in humans using both psychophysical and machine learning techniques. Frontal views of human faces were used for a gender classification task. Human subjects classified the faces, and their gender judgment, reaction time (RT) and confidence rating (CR) were recorded for each face. RTs are longer for incorrect answers than for correct ones, high CRs are correlated with low classification errors, and RTs decrease as the CRs increase. These results suggest that patterns difficult to classify need more computation by the brain than patterns easy to classify. Hyperplane learning algorithms such as Support Vector Machines (SVM), Relevance Vector Machines (RVM), Prototype learners (Prot) and K-means learners (Kmean) were used on the same classification task using the Principal Components of the texture and flowfield representation of the faces. The classification performance of the learning algorithms was estimated using the face database with the true gender of the faces as labels, and also with the gender estimated by the subjects. Kmean yields a classification performance close to humans while SVM and RVM are much better. This surprising behaviour may be due to the fact that humans are trained on real faces during their lifetime while they were here tested on artificial ones, while the algorithms were trained and tested on the same set of stimuli. We then correlated the human responses to the distance of the stimuli to the separating hyperplane (SH) of the learning algorithms. On the whole, stimuli far from the SH are classified more accurately, faster and with higher confidence than those near to the SH if we pool data across all our subjects and stimuli. We also find three noteworthy results. First, SVMs and RVMs can learn to classify faces using the subjects' labels but perform much better when using the true labels. Second, correlating the average response of humans (classification error, RT or CR) with the distance to the SH on a face-by-face basis using Spearman's rank correlation coefficients shows that RVMs recreate human performance most closely in every respect. Third, the mean-of-class prototype, its popularity in neuroscience notwithstanding, is the least human-like classifier in all cases examined.

Web [BibTex]

m-Alternative-Forced-Choice: Improving the Efficiency of the Method of Constant Stimuli

Jäkel, F., Hill, J., Wichmann, F.

7, pages: 118, 7th Tübingen Perception Conference (TWK), February 2004 (poster)

Abstract
We explored several ways to improve the efficiency of measuring psychometric functions without resorting to adaptive procedures. a) The number m of alternatives in an m-alternative-forced-choice (m-AFC) task improves the efficiency of the method of constant stimuli. b) When alternatives are presented simultaneously at different positions on a screen rather than sequentially, time can be saved and the memory load for the subject can be reduced. c) A touch-screen can further help to make the experimental procedure more intuitive. We tested these ideas in the measurement of contrast sensitivity and compared them to results obtained by sequential presentation in two-interval-forced-choice (2-IFC). Qualitatively, all methods (m-AFC and 2-IFC) recovered the characteristic shape of the contrast sensitivity function in three subjects. The m-AFC paradigm only took about 60% of the time of the 2-IFC task. We tried m=2,4,8 and found 4-AFC to give the best model fits and 2-AFC to have the least bias.

Web [BibTex]

Efficient Approximations for Support Vector Classifiers

Kienzle, W., Franz, M.

7, pages: 68, 7th Tübingen Perception Conference (TWK), February 2004 (poster)

Abstract
In face detection, support vector machines (SVM) and neural networks (NN) have been shown to outperform most other classification methods. While both approaches are learning-based, there are distinct advantages and drawbacks to each method: NNs are difficult to design and train but can lead to very small and efficient classifiers. In comparison, SVM model selection and training is rather straightforward, and, more importantly, guaranteed to converge to a globally optimal (in the sense of training errors) solution. Unfortunately, SVM classifiers tend to have large representations which are inappropriate for time-critical image processing applications. In this work, we examine various existing and new methods for simplifying support vector decision rules. Our goal is to obtain efficient classifiers (as with NNs) while keeping the numerical and statistical advantages of SVMs. For a given SVM solution, we compute a cascade of approximations with increasing complexities. Each classifier is tuned so that the detection rate is near 100%. At run-time, the first (simplest) detector is evaluated on the whole image. Then, any subsequent classifier is applied only to those positions that have been classified as positive throughout all previous stages. The false positive rate at the end equals that of the last (i.e. most complex) detector. In contrast, since many image positions are discarded by lower-complexity classifiers, the average computation time per patch decreases significantly compared to the time needed for evaluating the highest-complexity classifier alone.
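A schematic sketch of the run-time cascade described above. The stage score functions stand in for reduced-set SVM approximations of increasing complexity; all functions, thresholds and data here are illustrative placeholders, not the detectors of the poster.

import numpy as np

def cascade_classify(patches, stages):
    # patches: array of flattened image patches
    # stages: list of (score_fn, threshold) ordered from cheapest to most complex
    alive = np.arange(len(patches))            # positions still considered positive
    for score_fn, thr in stages:
        if alive.size == 0:
            break
        scores = score_fn(patches[alive])      # evaluate only the survivors
        alive = alive[scores >= thr]           # thresholds tuned for ~100% detection rate
    return alive                               # positions accepted by every stage

# dummy usage with random scores standing in for reduced-set SVM evaluations
rng = np.random.default_rng(0)
patches = rng.normal(size=(1000, 24 * 24))
stages = [(lambda P: P.mean(axis=1), 0.0),           # very cheap first stage
          (lambda P: P[:, :10].sum(axis=1), 0.5)]    # more expensive second stage
print(len(cascade_classify(patches, stages)))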

Web [BibTex]

Selective Attention to Auditory Stimuli: A Brain-Computer Interface Paradigm

Hill, N., Lal, T., Schröder, M., Hinterberger, T., Birbaumer, N., Schölkopf, B.

7, pages: 102, (Editors: Bülthoff, H.H., H.A. Mallot, R. Ulrich and F.A. Wichmann), 7th Tübingen Perception Conference (TWK), February 2004 (poster)

Abstract
During the last 20 years several paradigms for Brain Computer Interfaces have been proposed; see [1] for a recent review. They can be divided into (a) stimulus-driven paradigms, using e.g. event-related potentials or visual evoked potentials from an EEG signal, and (b) patient-driven paradigms such as those that use premotor potentials correlated with imagined action, or slow cortical potentials (e.g. [2]). Our aim is to develop a stimulus-driven paradigm that is applicable in practice to patients. Due to the unreliability of visual perception in “locked-in” patients in the later stages of disorders such as Amyotrophic Lateral Sclerosis, we concentrate on the auditory modality. Specifically, we look for the effects, in the EEG signal, of selective attention to one of two concurrent auditory stimulus streams, exploiting the increased activation to attended stimuli that is seen under some circumstances [3]. We present the results of our preliminary experiments on normal subjects. On each of 400 trials, two repetitive stimuli (sequences of drum-beats or other pulsed stimuli) could be heard simultaneously. The two stimuli were distinguishable from one another by their acoustic properties, by their source location (one from a speaker to the left of the subject, the other from the right), and by their differing periodicities. A visual cue preceded the stimulus by 500 msec, indicating which of the two stimuli to attend to, and the subject was instructed to count the beats in the attended stimulus stream. There were up to 6 beats of each stimulus: with equal probability on each trial, all 6 were played, or the fourth was omitted, or the fifth was omitted. The 40-channel EEG signals were analyzed offline to reconstruct which of the streams was attended on each trial. A linear Support Vector Machine [4] was trained on a random subset of the data and tested on the remainder. Results are compared from two types of pre-processing of the signal: for each stimulus stream, (a) EEG signals at the stream's beat periodicity are emphasized, or (b) EEG signals following beats are contrasted with those following missing beats. Both forms of pre-processing show promising results, i.e. that selective attention to one or the other auditory stream yields signals that are classifiable significantly above chance performance. In particular, the second pre-processing was found to be robust to reduction in the number of features used for classification (cf. [5]), helping us to eliminate noise.

PDF Web [BibTex]

Texture and Haptic Cues in Slant Discrimination: Measuring the Effect of Texture Type

Rosas, P., Wichmann, F., Ernst, M., Wagemans, J.

7, pages: 165, (Editors: Bülthoff, H.H., H.A. Mallot, R. Ulrich, F.A. Wichmann), 7th Tübingen Perception Conference (TWK), February 2004 (poster)

Abstract
In a number of models of depth cue combination the depth percept is constructed via a weighted average combination of independent depth estimations. The influence of each cue in such an average depends on the reliability of the source of information [1,5]. In particular, Ernst and Banks (2002) formulate such combination as that of the minimum variance unbiased estimator that can be constructed from the available cues. We have observed systematic differences in slant discrimination performance of human observers when different types of textures were used as cue to slant [4]. If the depth percept behaves as described above, our measurements of the slopes of the psychometric functions provide the predicted weights for the texture cue for the ranked texture types. However, the results for slant discrimination obtained when combining these texture types with object motion are difficult to reconcile with the minimum variance unbiased estimator model [3]. This apparent failure of such a model might be explained by the existence of a coupling of texture and motion, violating the assumption of independence of cues. Hillis, Ernst, Banks, and Landy (2002) [2] have shown that while for between-modality combination the human visual system has access to the single-cue information, for within-modality combination (visual cues) the single-cue information is lost. This suggests a coupling between visual cues and independence between visual and haptic cues. Therefore, in the present study we combined the different texture types with haptic information in a slant discrimination task, to test whether in the between-modality condition these cues are combined as predicted by an unbiased, minimum variance estimator model. The measured weights for the cues were consistent with a combination rule sensitive to the reliability of the sources of information, but did not match the predictions of a statistically optimal combination.

PDF Web [BibTex]

EEG Channel Selection for Brain Computer Interface Systems Based on Support Vector Methods

Schröder, M., Lal, T., Bogdan, M., Schölkopf, B.

7, pages: 50, (Editors: Bülthoff, H.H., H.A. Mallot, R. Ulrich and F.A. Wichmann), 7th Tübingen Perception Conference (TWK), February 2004 (poster)

Abstract
A Brain Computer Interface (BCI) system allows the direct interpretation of brain activity patterns (e.g. EEG signals) by a computer. Typical BCI applications comprise spelling aids or environmental control systems supporting paralyzed patients that have lost motor control completely. The design of an EEG based BCI system requires good answers for the problem of selecting useful features during the performance of a mental task as well as for the problem of classifying these features. For the special case of choosing appropriate EEG channels from several available channels, we propose the application of variants of the Support Vector Machine (SVM) for both problems. Although these algorithms do not rely on prior knowledge they can provide more accurate solutions than standard filter methods [1] for feature selection which usually incorporate prior knowledge about neural activity patterns during the performed mental tasks. For judging the importance of features we introduce a new relevance measure and apply it to EEG channels. Although we base the relevance measure for this purpose on the previously introduced algorithms, it does in general not depend on specific algorithms but can be derived using arbitrary combinations of feature selectors and classifiers.

Web [BibTex]

Learning Depth

Sinz, F., Franz, MO.

pages: 69, (Editors: H.H. Bülthoff, H.A. Mallot, R. Ulrich, F.A. Wichmann), 7th Tübingen Perception Conference (TWK), February 2004 (poster)

Abstract
The depth of a point in space can be estimated by observing its image position from two different viewpoints. The classical approach to stereo vision calculates depth from the two projection equations which together form a stereocamera model. An unavoidable preparatory work for this solution is a calibration procedure, i.e., estimating the external (position and orientation) and internal (focal length, lens distortions etc.) parameters of each camera from a set of points with known spatial position and their corresponding image positions. This is normally done by iteratively linearizing the single camera models and reestimating their parameters according to the error on the known datapoints. The advantage of the classical method is the maximal usage of prior knowledge about the underlying physical processes and the explicit estimation of meaningful model parameters such as focal length or camera position in space. However, the approach neglects the nonlinear nature of the problem such that the results critically depend on the choice of the initial values for the parameters. In this study, we approach the depth estimation problem from a different point of view by applying generic machine learning algorithms to learn the mapping from image coordinates to spatial position. These algorithms do not require any domain knowledge and are able to learn nonlinear functions by mapping the inputs into a higher-dimensional space. Compared to classical calibration, machine learning methods give a direct solution to the depth estimation problem which means that the values of the stereocamera parameters cannot be extracted from the learned mapping. On the poster, we compare the performance of classical camera calibration to that of different machine learning algorithms such as kernel ridge regression, gaussian processes and support vector regression. Our results indicate that generic learning approaches can lead to higher depth accuracies than classical calibration although no domain knowledge is used.
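A minimal sketch of the learning-based route described above, using kernel ridge regression from the image coordinates in two cameras to the 3D position. The toy projection model, kernel and parameters are illustrative assumptions, not the experimental setup of the poster.

import numpy as np

def rbf(X, Y, gamma):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# synthetic calibration data: image coordinates in two cameras -> 3D position
rng = np.random.default_rng(0)
P3d = rng.uniform(-1, 1, size=(300, 3))
def project(P, shift):                       # crude stand-in for two pinhole cameras
    return np.stack([(P[:, 0] + shift) / (P[:, 2] + 3),
                     P[:, 1] / (P[:, 2] + 3)], axis=1)
X = np.hstack([project(P3d, -0.1), project(P3d, +0.1)])   # (u1, v1, u2, v2)

# kernel ridge regression: alpha = (K + lam*I)^-1 * targets
gamma, lam = 5.0, 1e-3
K = rbf(X, X, gamma)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), P3d)

# 3D prediction for new image coordinates
X_new = X[:5]
print(rbf(X_new, X, gamma) @ alpha)   # approximately P3d[:5]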

PDF Web [BibTex]

Multivariate Regression with Stiefel Constraints

Bakir, G., Gretton, A., Franz, M., Schölkopf, B.

(128), MPI for Biological Cybernetics, Spemannstr 38, 72076, Tuebingen, 2004 (techreport)

Abstract
We introduce a new framework for regression between multi-dimensional spaces. Standard methods for solving this problem typically reduce the problem to one-dimensional regression by choosing features in the input and/or output spaces. These methods, which include PLS (partial least squares), KDE (kernel dependency estimation), and PCR (principal component regression), select features based on different a-priori judgments as to their relevance. Moreover, loss function and constraints are chosen not primarily on statistical grounds, but to simplify the resulting optimisation. By contrast, in our approach the feature construction and the regression estimation are performed jointly, directly minimizing a loss function that we specify, subject to a rank constraint. A major advantage of this approach is that the loss is no longer chosen according to the algorithmic requirements, but can be tailored to the characteristics of the task at hand; the features will then be optimal with respect to this objective. Our approach also allows for the possibility of using a regularizer in the optimization. Finally, by processing the observations sequentially, our algorithm is able to work on large scale problems.

PDF [BibTex]

Neural mechanisms underlying control of a Brain-Computer-Interface (BCI): Simultaneous recording of bold-response and EEG

Hinterberger, T., Wilhelm, B., Veit, R., Weiskopf, N., Lal, TN., Birbaumer, N.

2004 (poster)

Abstract
Brain computer interfaces (BCI) enable humans or animals to communicate or activate external devices without muscle activity using electric brain signals. The BCI Thought Translation Device (TTD) uses learned regulation of slow cortical potentials (SCPs), a skill most people and paralyzed patients can acquire with training periods of several hours up to months. The neurophysiological mechanisms and anatomical sources of SCPs and other event-related brain macro-potentials are well understood, but the neural mechanisms underlying learning of the self-regulation skill for BCI-use are unknown. To uncover the relevant areas of brain activation during regulation of SCPs, the TTD was combined with functional MRI and EEG was recorded inside the MRI scanner in twelve healthy participants who have learned to regulate their SCP with feedback and reinforcement. The results demonstrate activation of specific brain areas during execution of the brain regulation skill: successful control of cortical positivity allowing a person to activate an external device was closely related to an increase of BOLD (blood oxygen level dependent) response in the basal ganglia and frontal premotor deactivation indicating learned regulation of a cortical-striatal loop responsible for local excitation thresholds of cortical assemblies. The data suggest that human users of a BCI learn the regulation of cortical excitation thresholds of large neuronal assemblies as a prerequisite of direct brain communication: the learning of this skill depends critically on an intact and flexible interaction between these cortico-basal ganglia-circuits. Supported by the Deutsche Forschungsgemeinschaft (DFG) and the National Institute of Health (NIH).

[BibTex]

Learning from Labeled and Unlabeled Data Using Random Walks

Zhou, D., Schölkopf, B.

Max Planck Institute for Biological Cybernetics, 2004 (techreport)

Abstract
We consider the general problem of learning from labeled and unlabeled data. Given a set of points, some of them are labeled, and the remaining points are unlabeled. The goal is to predict the labels of the unlabeled points. Any supervised learning algorithm can be applied to this problem, for instance, Support Vector Machines (SVMs). The question of interest is whether we can implement a classifier which uses the unlabeled data in some way and achieves higher accuracy than classifiers which use the labeled data only. Recently we proposed a simple algorithm, which can substantially benefit from large amounts of unlabeled data and demonstrates clear superiority to supervised learning methods. In this paper we further investigate the algorithm using random walks and spectral graph theory, which shed light on the key steps in this algorithm.
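For illustration, a sketch of a closely related graph-based label-spreading iteration, assuming a Gaussian-weighted graph and the normalized update F <- alpha*S*F + (1-alpha)*Y; the exact algorithm analyzed in the report may differ in details, and the data below are synthetic.

import numpy as np

def label_spreading(X, y, alpha=0.9, sigma=0.5, iters=200):
    # y: label vector with entries +1/-1 for labelled points and 0 for unlabelled ones
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)                          # no self-loops
    dinv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    S = dinv_sqrt[:, None] * W * dinv_sqrt[None, :]   # symmetrically normalised weights
    F = y.astype(float).copy()
    for _ in range(iters):                            # F <- alpha*S*F + (1-alpha)*y
        F = alpha * S @ F + (1 - alpha) * y
    return F                                          # real-valued scores for every point

# two noisy clusters, one labelled point each
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
y = np.zeros(100); y[0] = +1; y[50] = -1
F = label_spreading(X, y)
print(np.sign(F[:5]), np.sign(F[50:55]))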

PDF PostScript [BibTex]

Masking by plaid patterns revisited

Wichmann, F.

Experimentelle Psychologie. Beiträge zur 46. Tagung experimentell arbeitender Psychologen, 46, pages: 285, 2004 (poster)

[BibTex]

Behaviour and Convergence of the Constrained Covariance

Gretton, A., Smola, A., Bousquet, O., Herbrich, R., Schölkopf, B., Logothetis, N.

(130), MPI for Biological Cybernetics, 2004 (techreport)

Abstract
We discuss reproducing kernel Hilbert space (RKHS)-based measures of statistical dependence, with emphasis on constrained covariance (COCO), a novel criterion to test dependence of random variables. We show that COCO is a test for independence if and only if the associated RKHSs are universal. That said, no independence test exists that can distinguish dependent and independent random variables in all circumstances. Dependent random variables can result in a COCO which is arbitrarily close to zero when the source densities are highly non-smooth, which can make dependence hard to detect empirically. All current kernel-based independence tests share this behaviour. Finally, we demonstrate exponential convergence between the population and empirical COCO, which implies that COCO does not suffer from slow learning rates when used as a dependence test.

PDF [BibTex]

Early visual processing—data, theory, models

Wichmann, F.

Experimentelle Psychologie. Beiträge zur 46. Tagung experimentell arbeitender Psychologen, 46, pages: 24, 2004 (poster)

[BibTex]

Confidence Sets for Ratios: A Purely Geometric Approach To Fieller’s Theorem

von Luxburg, U., Franz, V.

(133), Max Planck Institute for Biological Cybernetics, 2004 (techreport)

Abstract
We present a simple, geometric method to construct Fieller's exact confidence sets for ratios of jointly normally distributed random variables. Contrary to previous geometric approaches in the literature, our method is valid in the general case where both sample mean and covariance are unknown. Moreover, not only the construction but also its proof are purely geometric and elementary, thus giving intuition into the nature of the confidence sets.

PDF [BibTex]

Transductive Inference with Graphs

Zhou, D., Schölkopf, B.

Max Planck Institute for Biological Cybernetics, 2004, See the improved version Regularization on Discrete Spaces. (techreport)

Abstract
We propose a general regularization framework for transductive inference. The given data are thought of as a graph, where the edges encode the pairwise relationships among data. We develop discrete analysis and geometry on graphs, and then naturally adapt the classical regularization in the continuous case to the graph situation. A new and effective algorithm is derived from this general framework, as well as an approach we developed before.

[BibTex]

Kompetenzerwerb für Informationssysteme - Einfluss des Lernprozesses auf die Interaktion mit Fahrerinformationssystemen. Veröffentlichter Abschlussbericht (Förderkennzeichen BaSt FE 82.196/2001).

Totzke, I., Krüger, H., Hofmann, M., Meilinger, T., Rauch, N., Schmidt, G.

Interdisziplinäres Zentrum für Verkehrswissenschaften (IZVW), Würzburg, 2004 (techreport)

[BibTex]

Implicit Wiener series for capturing higher-order interactions in images

Franz, M., Schölkopf, B.

Sensory coding and the natural environment, (Editors: Olshausen, B.A. and M. Lewicki), 2004 (poster)

Abstract
The information about the objects in an image is almost exclusively described by the higher-order interactions of its pixels. The Wiener series is one of the standard methods to systematically characterize these interactions. However, the classical estimation method of the Wiener expansion coefficients via cross-correlation suffers from severe problems that prevent its application to high-dimensional and strongly nonlinear signals such as images. We propose an estimation method based on regression in a reproducing kernel Hilbert space that overcomes these problems using polynomial kernels as known from Support Vector Machines and other kernel-based methods. Numerical experiments show performance advantages in terms of convergence, interpretability and system sizes that can be handled. By the time of the conference, we will be able to present first results on the higher-order structure of natural images.
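A minimal sketch of the estimation idea: kernel ridge regression with a polynomial kernel implicitly represents all pixel interactions up to the kernel degree without writing them out. This is not the exact estimator of the poster, and the patches and target below are synthetic.

import numpy as np

def poly_kernel(X, Y, degree=2):
    # inhomogeneous polynomial kernel spanning interactions up to the given order
    return (1.0 + X @ Y.T) ** degree

rng = np.random.default_rng(0)
patches = rng.normal(size=(500, 16))                 # 4x4 image patches, flattened
target = (patches[:, 0] * patches[:, 5]              # a purely second-order
          - patches[:, 2] * patches[:, 7])           # pixel interaction to recover

K = poly_kernel(patches, patches, degree=2)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(K)), target)

test = rng.normal(size=(5, 16))
pred = poly_kernel(test, patches, degree=2) @ alpha
true = test[:, 0] * test[:, 5] - test[:, 2] * test[:, 7]
print(np.c_[pred, true])                             # predictions vs. ground truth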

[BibTex]

Classification and Memory Behaviour of Man Revisited by Machine

Graf, A., Wichmann, F., Bülthoff, H., Schölkopf, B.

CSHL Meeting on Computational & Systems Neuroscience (COSYNE), 2004 (poster)

[BibTex]


2002


Real-Time Statistical Learning for Oculomotor Control and Visuomotor Coordination

Vijayakumar, S., Souza, A., Peters, J., Conradt, J., Rutkowski, T., Ijspeert, A., Nakanishi, J., Inoue, M., Shibata, T., Wiryo, A., Itti, L., Amari, S., Schaal, S.

(Editors: Becker, S. , S. Thrun, K. Obermayer), Sixteenth Annual Conference on Neural Information Processing Systems (NIPS), December 2002 (poster)

Web [BibTex]

Surface-slant-from-texture discrimination: Effects of slant level and texture type

Rosas, P., Wichmann, F., Wagemans, J.

Journal of Vision, 2(7):300, Second Annual Meeting of the Vision Sciences Society (VSS), November 2002 (poster)

Abstract
The problem of surface-slant-from-texture was studied psychophysically by measuring the performances of five human subjects in a slant-discrimination task with a number of different types of textures: uniform lattices, randomly displaced lattices, polka dots, Voronoi tessellations, orthogonal sinusoidal plaid patterns, fractal or 1/f noise, “coherent” noise and a “diffusion-based” texture (leopard skin-like). The results show: (1) Improving performance with larger slants for all textures. (2) A “non-symmetrical” performance around a particular slant characterized by a psychometric function that is steeper in the direction of the more slanted orientation. (3) For sufficiently large slants (66 deg) there are no major differences in performance between any of the different textures. (4) For slants at 26, 37 and 53 degrees, however, there are marked differences between the different textures. (5) The observed differences in performance across textures for slants up to 53 degrees are systematic within subjects, and nearly so across them. This allows a rank-order of textures to be formed according to their “helpfulness” — that is, how easy the discrimination task is when a particular texture is mapped on the surface. Polka dots tended to allow the best slant discrimination performance, noise patterns the worst up to the large slant of 66 degrees at which performance was almost independent of the particular texture chosen. Finally, our large number of 2AFC trials (approximately 2800 trials per texture across subjects) and associated tight confidence intervals may enable us to find out about which statistical properties of the textures could be responsible for surface-slant-from-texture estimation, with the ultimate goal of being able to predict observer performance for any arbitrary texture.

Web DOI [BibTex]

Modelling Contrast Transfer in Spatial Vision

Wichmann, F.

Journal of Vision, 2(10):7, Second Annual Meeting of the Vision Sciences Society (VSS), November 2002 (poster)

Abstract
Much of our information about spatial vision comes from detection experiments involving low-contrast stimuli. Contrast discrimination experiments provide one way to explore the visual system's response to stimuli of higher contrast, the results of which allow different models of contrast processing (e.g. energy versus gain-control models) to be critically assessed (Wichmann & Henning, 1999). Studies of detection and discrimination using pulse train stimuli in noise, on the other hand, make predictions about the number, position and properties of noise sources within the processing stream (Henning, Bird & Wichmann, 2002). Here I report modelling results combining data from both sinusoidal and pulse train experiments in and without noise to arrive at a more tightly constrained model of early spatial vision.

Web DOI [BibTex]

Pulse train detection and discrimination in pink noise

Henning, G., Wichmann, F., Bird, C.

Journal of Vision, 2(7):229, Second Annual Meeting of the Vision Sciences Society (VSS), November 2002 (poster)

Abstract
Much of our information about spatial vision comes from detection experiments involving low-contrast stimuli. Contrast discrimination experiments provide one way to explore the visual system's response to stimuli of higher contrast. We explored both detection and contrast discrimination performance with sinusoidal and "pulse-train" (or line) gratings. Both types of grating had a fundamental spatial frequency of 2.09-c/deg but the pulse-train, ideally, contains, in addition to its fundamental component, all the harmonics of the fundamental. Although the 2.09-c/deg pulse-train produced on the display was measured and shown to contain at least 8 harmonics at equal contrast, it was no more detectable than its most detectable component; no benefit from having additional information at the harmonics was measurable. The addition of broadband "pink" noise, designed to equalize the detectability of the components of the pulse train, made it about a factor of four more detectable than any of its components. However, in contrast-discrimination experiments, with an in-phase pedestal or masking grating of the same form and phase as the signal and 15% contrast, the noise did not improve the discrimination performance of the pulse train relative to that of its sinusoidal components. In contrast, a 2.09-c/deg "super train," constructed to have 8 equally detectable harmonics, was a factor of five more detectable than any of its components. We discuss the implications of these observations for models of early vision in particular the implications for possible sources of internal noise.
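For concreteness, a small sketch of how a pulse-train stimulus of this kind can be synthesized from equal-contrast harmonics of the 2.09-c/deg fundamental; display calibration, spatial windowing and the pink-noise masker are omitted, and the contrast value is an arbitrary assumption.

import numpy as np

f0 = 2.09                                   # cycles per degree (fundamental)
x = np.linspace(0, 1 / f0, 1000)            # one spatial period, in degrees
contrast = 0.10                             # per-component contrast

sinusoid = contrast * np.cos(2 * np.pi * f0 * x)
# pulse train: first 8 harmonics, all at the same contrast as the fundamental
pulse_train = sum(contrast * np.cos(2 * np.pi * n * f0 * x) for n in range(1, 9))
# pulse_train has one sharp luminance peak per period; its spectrum contains
# 8 equal-contrast components, as in the stimuli described above
print(sinusoid.max(), pulse_train.max())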

Web DOI [BibTex]

Kernel Dependency Estimation

Weston, J., Chapelle, O., Elisseeff, A., Schölkopf, B., Vapnik, V.

(98), Max Planck Institute for Biological Cybernetics, August 2002 (techreport)

Abstract
We consider the learning problem of finding a dependency between a general class of objects and another, possibly different, general class of objects. The objects can be for example: vectors, images, strings, trees or graphs. Such a task is made possible by employing similarity measures in both input and output spaces using kernel functions, thus embedding the objects into vector spaces. Output kernels also make it possible to encode prior information and/or invariances in the loss function in an elegant way. We experimentally validate our approach on several tasks: mapping strings to strings, pattern recognition, and reconstruction from partial images.

PDF [BibTex]

Phase information in the recognition of natural images

Braun, D., Wichmann, F., Gegenfurtner, K.

Perception, 31(ECVP Abstract Supplement):133, 25th European Conference on Visual Perception, August 2002 (poster)

Abstract
Fourier phase plays an important role in determining global image structure. For example, when the phase spectrum of an image of a flower is swapped with that of a tank, we usually perceive a tank, even though the amplitude spectrum is still that of the flower. Similarly, when the phase spectrum of an image is randomly swapped across frequencies, that is its Fourier energy is randomly distributed over the image, the resulting image becomes impossible to recognise. Our goal was to evaluate the effect of phase manipulations in a quantitative manner. Subjects viewed two images of natural scenes, one of which contained an animal (the target) embedded in the background. The spectra of the images were manipulated by adding random phase noise at each frequency. The phase noise was the independent variable, uniformly distributed between 0° and ±180°. Subjects were remarkably resistant to phase noise. Even with ±120° noise, subjects were still 75% correct. The proportion of correct answers closely followed the correlation between original and noise-distorted images. Thus it appears as if it was not the global phase information per se that determines our percept of natural images, but rather the effect of phase on local image features.
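A short sketch of the phase-noise manipulation described above, assuming i.i.d. uniform phase perturbations applied to the full 2D spectrum; the image here is a random stand-in, not one of the natural scenes used in the experiment.

import numpy as np

def add_phase_noise(image, level_deg):
    # perturb the Fourier phase by uniform noise in [-level_deg, +level_deg] degrees
    # while keeping the amplitude spectrum unchanged
    F = np.fft.fft2(image)
    noise = np.deg2rad(np.random.uniform(-level_deg, level_deg, size=F.shape))
    F_noisy = np.abs(F) * np.exp(1j * (np.angle(F) + noise))
    # the perturbed spectrum is no longer exactly Hermitian, so take the real part
    return np.real(np.fft.ifft2(F_noisy))

img = np.random.rand(128, 128)         # stand-in for a grey-scale natural image
distorted = add_phase_noise(img, 120)  # +-120 deg, where observers were ~75% correct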

Web [BibTex]

Global Geometry of SVM Classifiers

Zhou, D., Xiao, B., Zhou, H., Dai, R.

Max Planck Institute for Biological Cybernetics, Tübingen, Germany, June 2002 (techreport)

Abstract
We construct a geometric framework for Support Vector Machine (SVM) classifiers with arbitrary norms. Within this framework, separating hyperplanes, dual descriptions and solutions of SVM classifiers are constructed in a purely geometric fashion. In contrast with the optimization theory usually employed for SVM classifiers, no complicated computations are required; each step in our theory is guided by elegant geometric intuitions.

PDF PostScript [BibTex]

Computationally Efficient Face Detection

Romdhani, S., Torr, P., Schölkopf, B., Blake, A.

(MSR-TR-2002-69), Microsoft Research, June 2002 (techreport)

Web [BibTex]

Detection and discrimination in pink noise

Wichmann, F., Henning, G.

5, pages: 100, 5. Tübinger Wahrnehmungskonferenz (TWK), February 2002 (poster)

Abstract
Much of our information about early spatial vision comes from detection experiments involving low-contrast stimuli, which are not, perhaps, particularly "natural" stimuli. Contrast discrimination experiments provide one way to explore the visual system's response to stimuli of higher contrast whilst keeping the number of unknown parameters comparatively small. We explored both detection and contrast discrimination performance with sinusoidal and "pulse-train" (or line) gratings. Both types of grating had a fundamental spatial frequency of 2.09-c/deg but the pulse-train, ideally, contains, in addition to its fundamental component, all the harmonics of the fundamental. Although the 2.09-c/deg pulse-train produced on our display was measured using a high-performance digital camera (Photometrics) and shown to contain at least 8 harmonics at equal contrast, it was no more detectable than its most detectable component; no benefit from having additional information at the harmonics was measurable. The addition of broadband 1-D "pink" noise made it about a factor of four more detectable than any of its components. However, in contrast-discrimination experiments, with an in-phase pedestal or masking grating of the same form and phase as the signal and 15% contrast, the noise did not improve the discrimination performance of the pulse train relative to that of its sinusoidal components. We discuss the implications of these observations for models of early vision in particular the implications for possible sources of internal noise.

Web [BibTex]

Kernel-based nonlinear blind source separation

Harmeling, S., Ziehe, A., Kawanabe, M., Müller, K.

EU-Project BLISS, January 2002 (techreport)

GZIP [BibTex]

A compression approach to support vector model selection

von Luxburg, U., Bousquet, O., Schölkopf, B.

(101), Max Planck Institute for Biological Cybernetics, 2002, see more detailed JMLR version (techreport)

Abstract
In this paper we investigate connections between statistical learning theory and data compression on the basis of support vector machine (SVM) model selection. Inspired by several generalization bounds we construct ``compression coefficients'' for SVMs, which measure the amount by which the training labels can be compressed by some classification hypothesis. The main idea is to relate the coding precision of this hypothesis to the width of the margin of the SVM. The compression coefficients connect well known quantities such as the radius-margin ratio R^2/rho^2, the eigenvalues of the kernel matrix and the number of support vectors. To test whether they are useful in practice we ran model selection experiments on several real world datasets. As a result we found that compression coefficients can fairly accurately predict the parameters for which the test error is minimized.

[BibTex]

Feature Selection and Transduction for Prediction of Molecular Bioactivity for Drug Design

Weston, J., Perez-Cruz, F., Bousquet, O., Chapelle, O., Elisseeff, A., Schölkopf, B.

Max Planck Institute for Biological Cybernetics / Biowulf Technologies, 2002 (techreport)

Web [BibTex]

Application of Monte Carlo Methods to Psychometric Function Fitting

Wichmann, F.

Proceedings of the 33rd European Conference on Mathematical Psychology, pages: 44, 2002 (poster)

Abstract
The psychometric function relates an observer's performance to an independent variable, usually some physical quantity of a stimulus in a psychophysical task. Here I describe methods for (1) fitting psychometric functions, (2) assessing goodness-of-fit, and (3) providing confidence intervals for the function's parameters and other estimates derived from them. First I describe a constrained maximum-likelihood method for parameter estimation. Using Monte-Carlo simulations I demonstrate that it is important to have a fitting method that takes stimulus-independent errors (or "lapses") into account. Second, a number of goodness-of-fit tests are introduced. Because psychophysical data sets are usually rather small I advocate the use of Monte Carlo resampling techniques that do not rely on asymptotic theory for goodness-of-fit assessment. Third, a parametric bootstrap is employed to estimate the variability of fitted parameters and derived quantities such as thresholds and slopes. I describe how the bootstrap bridging assumption, on which the validity of the procedure depends, can be tested without incurring too high a cost in computation time. Finally I describe how the methods can be extended to test hypotheses concerning the form and shape of several psychometric functions. Software implementing the methods is available (http://www.bootstrap-software.com/psignifit/), as well as articles describing the methods in detail (Wichmann & Hill, Perception & Psychophysics, 2001a,b).
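A minimal sketch of constrained maximum-likelihood fitting of a psychometric function with a lapse term, here assuming a Weibull shape for a 2AFC task; psignifit's actual options, priors and bootstrap machinery are not reproduced, and the data are simulated.

import numpy as np
from scipy.optimize import minimize

def weibull(x, alpha, beta, gamma, lam):
    # 2AFC psychometric function with guess rate gamma and lapse rate lam
    return gamma + (1 - gamma - lam) * (1 - np.exp(-(x / alpha) ** beta))

def fit_psychometric(x, n_correct, n_total, gamma=0.5, max_lapse=0.06):
    def nll(p):
        alpha, beta, lam = p
        psi = np.clip(weibull(x, alpha, beta, gamma, lam), 1e-6, 1 - 1e-6)
        return -np.sum(n_correct * np.log(psi) + (n_total - n_correct) * np.log(1 - psi))
    res = minimize(nll, x0=[np.median(x), 3.0, 0.01],
                   bounds=[(1e-6, None), (0.1, 20.0), (0.0, max_lapse)])
    return res.x   # alpha (threshold scale), beta (slope), lambda (lapse rate)

# toy data: stimulus levels, trials per level, simulated correct counts
levels = np.array([0.02, 0.04, 0.08, 0.16, 0.32])
n_total = np.full(5, 50)
rng = np.random.default_rng(0)
n_correct = rng.binomial(n_total, weibull(levels, 0.08, 3.0, 0.5, 0.02))
print(fit_psychometric(levels, n_correct, n_total))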

[BibTex]

Observations on the Nyström Method for Gaussian Process Prediction

Williams, C., Rasmussen, C., Schwaighofer, A., Tresp, V.

Max Planck Institute for Biological Cybernetics, Tübingen, Germany, 2002 (techreport)

Abstract
A number of methods for speeding up Gaussian Process (GP) prediction have been proposed, including the Nyström method of Williams and Seeger (2001). In this paper we focus on two issues: (1) the relationship of the Nyström method to the Subset of Regressors method (Poggio and Girosi 1990; Luo and Wahba, 1997) and (2) understanding in what circumstances the Nyström approximation would be expected to provide a good approximation to exact GP regression.
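For reference, a small sketch of the Nyström approximation itself, built from the cross-kernel and subset-kernel blocks that also appear in the Subset of Regressors predictor; the subset is chosen at random here and the data are synthetic.

import numpy as np

def rbf(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
m = 50                                          # size of the random subset
idx = rng.choice(len(X), size=m, replace=False)

Knm = rbf(X, X[idx])                            # n x m cross-kernel
Kmm = rbf(X[idx], X[idx])                       # m x m subset kernel
K_nystrom = Knm @ np.linalg.pinv(Kmm) @ Knm.T   # rank-m approximation of the full K

K_full = rbf(X, X)
print(np.linalg.norm(K_full - K_nystrom) / np.linalg.norm(K_full))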

PostScript [BibTex]

Optimal linear estimation of self-motion - a real-world test of a model of fly tangential neurons

Franz, MO.

SAB 02 Workshop, Robotics as theoretical biology, 7th meeting of the International Society for Simulation of Adaptive Behaviour (SAB), (Editors: Prescott, T.; Webb, B.), 2002 (poster)

Abstract
The tangential neurons in the fly brain are sensitive to the typical optic flow patterns generated during self-motion (see example in Fig.1). We examine whether a simplified linear model of these neurons can be used to estimate self-motion from the optic flow. We present a theory for the construction of an optimal linear estimator incorporating prior knowledge both about the distance distribution of the environment, and about the noise and self-motion statistics of the sensor. The optimal estimator is tested on a gantry carrying an omnidirectional vision sensor that can be moved along three translational and one rotational degree of freedom. The experiments indicate that the proposed approach yields accurate results for rotation estimates, independently of the current translation and scene layout. Translation estimates, however, turned out to be sensitive to simultaneous rotation and to the particular distance distribution of the scene. The gantry experiments confirm that the receptive field organization of the tangential neurons allows them, as an ensemble, to extract self-motion from the optic flow.
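A generic sketch of how such an optimal linear estimator can be constructed from simulated flow/self-motion pairs: the distance distribution, noise and motion statistics enter through the simulation, and the mean-square-optimal linear map is then obtained in closed form. The flow model below is an arbitrary stand-in, not the fly-specific model of the poster.

import numpy as np

rng = np.random.default_rng(0)
n_dirs, n_samples = 60, 5000
A = rng.normal(size=(n_dirs, 6))                  # stand-in flow model: y = A x + noise
X = rng.normal(size=(n_samples, 6))               # self-motion samples (3 rot + 3 trans)
Y = X @ A.T + 0.1 * rng.normal(size=(n_samples, n_dirs))

Cxy = X.T @ Y / n_samples                         # cross-covariance of motion and flow
Cyy = Y.T @ Y / n_samples                         # flow covariance
W = Cxy @ np.linalg.pinv(Cyy)                     # optimal linear estimator x_hat = W y

x_true = rng.normal(size=6)
y_obs = A @ x_true + 0.1 * rng.normal(size=n_dirs)
print(W @ y_obs, x_true)                          # estimate vs. true self-motion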

PDF [BibTex]


2000


Contrast discrimination using periodic pulse trains

Wichmann, F., Henning, G.

pages: 74, 3. T{\"u}binger Wahrnehmungskonferenz (TWK), February 2000 (poster)

Abstract
Understanding contrast transduction is essential for understanding spatial vision. Previous research (Wichmann et al. 1998; Wichmann, 1999; Henning and Wichmann, 1999) has demonstrated the importance of high contrasts to distinguish between alternative models of contrast discrimination. However, the modulation transfer function of the eye imposes large contrast losses on stimuli, particularly for stimuli of high spatial frequency, making high retinal contrasts difficult to obtain using sinusoidal gratings. Standard 2AFC contrast discrimination experiments were conducted using periodic pulse trains as stimuli. Given our Mitsubishi display we achieve stimuli with up to 160% contrast at the fundamental frequency. The shape of the threshold versus (pedestal) contrast (TvC) curve using pulse trains shows the characteristic dipper shape, i.e. contrast discrimination is sometimes “easier” than detection. The rising part of the TvC function has the same slope as that measured for contrast discrimination using sinusoidal gratings of the same frequency as the fundamental. Periodic pulse trains offer the possibility to explore the visual system’s properties using high retinal contrasts. Thus they might prove useful in tasks other than contrast discrimination. Second, at least for high spatial frequencies (8 c/deg) it appears that contrast discrimination using sinusoids and periodic pulse trains results in virtually identical TvC functions, indicating a lack of probability summation. Further implications of these results are discussed.

Web [BibTex]

Subliminale Darbietung verkehrsrelevanter Information in Kraftfahrzeugen

Staedtgen, M., Hahn, S., Franz, MO., Spitzer, M.

pages: 98, (Editors: H.H. Bülthoff, K.R. Gegenfurtner, H.A. Mallot), 3. Tübinger Wahrnehmungskonferenz (TWK), February 2000 (poster)

Abstract
Modern image-processing technology makes it possible to automatically detect certain critical traffic situations in vehicles and to warn or inform the driver. One problem is the presentation of the results, which should burden the driver as little as possible and should not divert attention from the traffic by additional warning lights or acoustic signals. In a series of experiments we therefore investigated whether subliminally presented, i.e. not consciously perceived, traffic-relevant information affects behaviour and can be used to convey information to the driver. In a semantic priming experiment using a lexical decision task, we showed that traffic-related words are processed faster when a related picture of a traffic sign has been presented subliminally beforehand. A speed-up was also obtained with parafoveal presentation of the subliminal stimuli. In a visual search task, traffic signs in images of real traffic scenes were detected faster when the picture of the traffic sign had been presented subliminally beforehand. In both experiments the presentation time for the cues was 17 ms, and conscious perception was additionally prevented by forward and backward masking. These laboratory studies showed that subliminally presented stimuli can speed up information processing in the context of road traffic as well. In a third experiment, the effect of a subliminal cue on braking reaction time was examined in a real driving test. The participants (n=17) were instructed to brake as quickly as possible when the brake lights of a vehicle driving 12-15 m ahead lit up. In 50 of a total of 100 trials a subliminal stimulus (two red dots, one centimetre in diameter and ten centimetres apart) was presented 150 ms before the brake lights came on. The stimulus was shown on a TFT-LCD display integrated into the car in place of the speedometer. Compared to the reaction without the subliminal stimulus, reaction time was significantly shortened by 51 ms. The experiments described show that the subliminal presentation of traffic-relevant information can also affect behaviour in vehicles. In the future, combining online image processing in the vehicle with subliminal presentation of the results could increase traffic safety and comfort.

Web [BibTex]

The Kernel Trick for Distances

Schölkopf, B.

(MSR-TR-2000-51), Microsoft Research, Redmond, WA, USA, 2000 (techreport)

Abstract
A method is described which, like the kernel trick in support vector machines (SVMs), lets us generalize distance-based algorithms to operate in feature spaces, usually nonlinearly related to the input space. This is done by identifying a class of kernels which can be represented as norm-based distances in Hilbert spaces. It turns out that common kernel algorithms, such as SVMs and kernel PCA, are really distance-based algorithms and can be run with that class of kernels, too. As well as providing a useful new insight into how these algorithms work, the present work can form the basis for conceiving new algorithms.
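A minimal sketch of the identity behind this, ||phi(x) - phi(y)||^2 = k(x,x) - 2 k(x,y) + k(y,y), used to run a distance-based algorithm (here nearest neighbour) in the feature space of a kernel; the data are synthetic.

import numpy as np

def rbf(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def feature_space_distance(x, y, k=rbf):
    # ||phi(x) - phi(y)|| induced by the kernel k, without computing phi
    return np.sqrt(k(x, x) - 2 * k(x, y) + k(y, y))

def nn_predict(x_new, X_train, y_train, k=rbf):
    # kernelised 1-nearest-neighbour: a distance-based algorithm run in feature space
    d = [feature_space_distance(x_new, x, k) for x in X_train]
    return y_train[int(np.argmin(d))]

rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
y_train = np.array([0] * 20 + [1] * 20)
print(nn_predict(np.array([3.5, 4.2]), X_train, y_train))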

PDF Web [BibTex]

Kernel method for percentile feature extraction

Schölkopf, B., Platt, J., Smola, A.

(MSR-TR-2000-22), Microsoft Research, 2000 (techreport)

Abstract
A method is proposed which computes a direction in a dataset such that a specified fraction of a particular class of all examples is separated from the overall mean by a maximal margin. The projector onto that direction can be used for class-specific feature extraction. The algorithm is carried out in a feature space associated with a support vector kernel function, hence it can be used to construct a large class of nonlinear feature extractors. In the particular case where there exists only one class, the method can be thought of as a robust form of principal component analysis, where instead of variance we maximize percentile thresholds. Finally, we generalize it to also include the possibility of specifying negative examples.

PDF [BibTex]
