

2003


Hierarchical Spatio-Temporal Morphable Models for Representation of Complex Movements for Imitation Learning

Ilg, W., Bakir, GH., Franz, MO., Giese, M.

In Proceedings of the 11th International Conference on Advanced Robotics, (2):453-458, (Editors: Nunes, U., A. de Almeida, A. Bejczy, K. Kosuge and J.A.T. Machado), January 2003 (inproceedings)

PDF [BibTex]

Modeling Data using Directional Distributions

Dhillon, I., Sra, S.

Univ. of Texas at Austin, January 2003 (techreport)

GZIP [BibTex]

Hyperkernels

Ong, CS., Smola, AJ., Williamson, RC.

In pages: 495-502, 2003 (inproceedings)

PDF [BibTex]

An Introduction to Variable and Feature Selection

Guyon, I., Elisseeff, A.

Journal of Machine Learning Research, 3, pages: 1157-1182, 2003 (article)

[BibTex]

Feature Selection for Support Vector Machines by Means of Genetic Algorithms

Fröhlich, H., Chapelle, O., Schölkopf, B.

In 15th IEEE International Conference on Tools with AI, pages: 142-148, 2003 (inproceedings)

[BibTex]

Propagation of Uncertainty in Bayesian Kernel Models - Application to Multiple-Step Ahead Forecasting

Quiñonero-Candela, J., Girard, A., Larsen, J., Rasmussen, CE.

In IEEE International Conference on Acoustics, Speech and Signal Processing, 2, pages: 701-704, 2003 (inproceedings)

Abstract
The object of Bayesian modelling is the predictive distribution, which in a forecasting scenario enables improved estimates of forecasted values and their uncertainties. In this paper we focus on reliably estimating the predictive mean and variance of forecasted values using Bayesian kernel based models such as the Gaussian Process and the Relevance Vector Machine. We derive novel analytic expressions for the predictive mean and variance for Gaussian kernel shapes under the assumption of a Gaussian input distribution in the static case, and of a recursive Gaussian predictive density in iterative forecasting. The capability of the method is demonstrated for forecasting of time-series and compared to approximate methods.
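The paper derives closed-form expressions for Gaussian kernels; the quantities involved can also be illustrated numerically. The sketch below (an illustration only, not the authors' derivation) propagates a Gaussian input distribution through a standard GP regression posterior by Monte Carlo and combines the per-sample moments via the law of total variance:

```python
import numpy as np

def gp_posterior(X, y, xs, ell=1.0, sf=1.0, noise=0.1):
    """Standard GP regression posterior (RBF kernel) at test inputs xs."""
    k = lambda a, b: sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
    K = k(X, X) + noise**2 * np.eye(len(X))
    Ks = k(xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = sf**2 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 25)
y = np.sin(X) + 0.1 * rng.standard_normal(25)

# Uncertain test input x* ~ N(mu, s^2): sample it and push through the GP
mu, s = 1.0, 0.3
xs = rng.normal(mu, s, size=5000)
m, v = gp_posterior(X, y, xs)

# Law of total variance gives the predictive moments at the uncertain input
pred_mean = m.mean()
pred_var = v.mean() + m.var()
print(pred_mean, pred_var)
```

The analytic results in the paper replace these Monte Carlo averages with exact integrals against the Gaussian input density; note that the predictive variance is always inflated relative to the average pointwise variance by the spread of the posterior mean.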

PDF PostScript [BibTex]

Unsupervised Clustering of Images using their Joint Segmentation

Seldin, Y., Starik, S., Werman, M.

In The 3rd International Workshop on Statistical and Computational Theories of Vision (SCTV 2003), pages: 1-24, 2003 (inproceedings)

PDF Web [BibTex]

A Note on Parameter Tuning for On-Line Shifting Algorithms

Bousquet, O.

Max Planck Institute for Biological Cybernetics, Tübingen, Germany, 2003 (techreport)

Abstract
In this short note, building on ideas of M. Herbster [2] we propose a method for automatically tuning the parameter of the FIXED-SHARE algorithm proposed by Herbster and Warmuth [3] in the context of on-line learning with shifting experts. We show that this can be done with a memory requirement of $O(nT)$ and that the additional loss incurred by the tuning is the same as the loss incurred for estimating the parameter of a Bernoulli random variable.
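For context, the FIXED-SHARE forecaster whose switching-rate parameter the note proposes to tune can be sketched as follows (the standard Herbster-Warmuth update; here the rate alpha is simply set by hand, which is exactly what the note's automatic tuning avoids):

```python
import numpy as np

def fixed_share(expert_losses, eta=0.5, alpha=0.05):
    """FIXED-SHARE forecaster: exponential weights with a share step that
    redistributes a fraction alpha of the mass, so the forecaster can
    track the best expert even when it changes over time."""
    T, n = expert_losses.shape
    w = np.full(n, 1.0 / n)
    total_loss = 0.0
    for t in range(T):
        loss = expert_losses[t]
        total_loss += w @ loss                  # forecaster's expected loss
        v = w * np.exp(-eta * loss)             # loss update
        v /= v.sum()
        w = (1 - alpha) * v + alpha * (v.sum() - v) / (n - 1)  # share step
    return total_loss

# Two experts; the better one switches halfway through the sequence
L = np.zeros((200, 2))
L[:100, 1] = 1.0   # expert 0 is good early
L[100:, 0] = 1.0   # expert 1 is good late
print(fixed_share(L, alpha=0.05), fixed_share(L, alpha=0.0))
```

With alpha=0 the forecaster commits to the early winner and pays heavily after the switch; a small positive alpha keeps a floor under every weight and recovers quickly, which is why tuning this parameter matters.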

PDF PostScript [BibTex]

Dynamics of a rigid body in a Stokes fluid

Gonzalez, O., Graf, ABA., Maddocks, JH.

Journal of Fluid Mechanics, 2003 (article) Accepted

[BibTex]

A novel transient heater-foil technique for liquid crystal experiments on film cooled surfaces

Vogel, G., Graf, ABA., von Wolfersdorf, J., Weigand, B.

ASME Journal of Turbomachinery, 125, pages: 529-537, 2003 (article)

PDF [BibTex]

Support Vector Machines

Schölkopf, B., Smola, A.

In Handbook of Brain Theory and Neural Networks (2nd edition), pages: 1119-1125, (Editors: MA Arbib), MIT Press, Cambridge, MA, USA, 2003 (inbook)

[BibTex]

Prediction at an Uncertain Input for Gaussian Processes and Relevance Vector Machines - Application to Multiple-Step Ahead Time-Series Forecasting

Quiñonero-Candela, J., Girard, A., Rasmussen, C.

(IMM-2003-18), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, 2003 (techreport)

PDF PostScript [BibTex]

Large margin Methods in Label Sequence Learning

Altun, Y.

Brown University, Providence, RI, USA, 2003 (mastersthesis)

[BibTex]

Kernel Methods and Their Applications to Signal Processing

Bousquet, O., Perez-Cruz, F.

In Proceedings of ICASSP 2003, Special Session on Kernel Methods, pages: 860, 2003 (inproceedings)

Abstract
Recently introduced in Machine Learning, the notion of kernels has drawn a lot of interest, as it makes it possible to obtain non-linear algorithms from linear ones in a simple and elegant manner. This, in conjunction with the introduction of new linear classification methods such as Support Vector Machines, has produced significant progress. The success of such algorithms is now spreading as they are applied to more and more domains. Many Signal Processing problems, by their non-linear and high-dimensional nature, may benefit from such techniques. We give an overview of kernel methods and their recent applications.

PDF PostScript [BibTex]

Predictive control with Gaussian process models

Kocijan, J., Murray-Smith, R., Rasmussen, CE., Likar, B.

In Proceedings of IEEE Region 8 Eurocon 2003: Computer as a Tool, pages: 352-356, (Editors: Zajc, B. and M. Tkal), 2003 (inproceedings)

Abstract
This paper describes model-based predictive control based on Gaussian processes. Gaussian process models provide a probabilistic non-parametric modelling approach for black-box identification of non-linear dynamic systems. They offer more insight into the variance of the obtained model response, as well as fewer parameters to determine than other models. Gaussian processes can highlight areas of the input space where prediction quality is poor, due to the lack of data or its complexity, by indicating the higher variance around the predicted mean. This property is used in predictive control, where optimisation of the control signal takes the variance information into account. The predictive control principle is demonstrated on a simulated example of a nonlinear system.

PDF PostScript [BibTex]

Extension of the nu-SVM range for classification

Perez-Cruz, F., Weston, J., Herrmann, D., Schölkopf, B.

In Advances in Learning Theory: Methods, Models and Applications, NATO Science Series III: Computer and Systems Sciences, Vol. 190, pages: 179-196, (Editors: J Suykens and G Horvath and S Basu and C Micchelli and J Vandewalle), IOS Press, Amsterdam, 2003 (inbook)

[BibTex]

m-Alternative Forced Choice—Improving the Efficiency of the Method of Constant Stimuli

Jäkel, F.

Biologische Kybernetik, Graduate School for Neural and Behavioural Sciences, Tübingen, 2003 (diplomathesis)

[BibTex]

Microarrays: How Many Do You Need?

Zien, A., Fluck, J., Zimmer, R., Lengauer, T.

Journal of Computational Biology, 10(3-4):653-667, 2003 (article)

Abstract
We estimate the number of microarrays that is required in order to gain reliable results from a common type of study: the pairwise comparison of different classes of samples. We show that current knowledge allows for the construction of models that look realistic with respect to searches for individual differentially expressed genes and derive prototypical parameters from real data sets. Such models allow investigation of the dependence of the required number of samples on the relevant parameters: the biological variability of the samples within each class, the fold changes in expression that are desired to be detected, the detection sensitivity of the microarrays, and the acceptable error rates of the results. We supply experimentalists with general conclusions as well as a freely accessible Java applet at www.scai.fhg.de/special/bio/howmanyarrays/ for fine tuning simulations to their particular settings.
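The paper's answer comes from simulation models fitted to real data; the qualitative dependence on biological variability, detectable fold change, and error rates can already be seen in a textbook two-sample sample-size formula (a deliberate simplification of my own, not the authors' model):

```python
import math
from statistics import NormalDist

def arrays_per_class(log2_fold_change, sigma, alpha=0.001, power=0.9):
    """Arrays per class needed to detect a given log2 fold change against
    biological standard deviation `sigma` (two-sample z-approximation).
    `alpha` is kept small to compensate for testing thousands of genes."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)   # significance threshold
    zb = z.inv_cdf(power)           # desired detection power
    return math.ceil(2 * ((za + zb) * sigma / log2_fold_change) ** 2)

# Detecting a 2-fold change (1 on the log2 scale) vs. a 4-fold change
print(arrays_per_class(1.0, sigma=0.7))
print(arrays_per_class(2.0, sigma=0.7))
```

The required number grows quadratically as the fold change to be detected shrinks relative to the biological variability, which is the central trade-off the paper quantifies with more realistic models.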

Web [BibTex]

New Approaches to Statistical Learning Theory

Bousquet, O.

Annals of the Institute of Statistical Mathematics, 55(2):371-389, 2003 (article)

Abstract
We present new tools from probability theory that can be applied to the analysis of learning algorithms. These tools allow us to derive new bounds on the generalization performance of learning algorithms and to propose alternative measures of the complexity of the learning task, which in turn can be used to derive new learning algorithms.

PostScript [BibTex]

Distance-based classification with Lipschitz functions

von Luxburg, U., Bousquet, O.

In Learning Theory and Kernel Machines, Proceedings of the 16th Annual Conference on Computational Learning Theory, pages: 314-328, (Editors: Schölkopf, B. and M.K. Warmuth), 2003 (inproceedings)

Abstract
The goal of this article is to develop a framework for large margin classification in metric spaces. We want to find a generalization of linear decision functions for metric spaces and define a corresponding notion of margin such that the decision function separates the training points with a large margin. It will turn out that using Lipschitz functions as decision functions, the inverse of the Lipschitz constant can be interpreted as the size of a margin. In order to construct a clean mathematical setup we isometrically embed the given metric space into a Banach space and the space of Lipschitz functions into its dual space. Our approach leads to a general large margin algorithm for classification in metric spaces. To analyze this algorithm, we first prove a representer theorem. It states that there exists a solution which can be expressed as linear combination of distances to sets of training points. Then we analyze the Rademacher complexity of some Lipschitz function classes. The generality of the Lipschitz approach can be seen from the fact that several well-known algorithms are special cases of the Lipschitz algorithm, among them the support vector machine, the linear programming machine, and the 1-nearest neighbor classifier.
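A concrete instance of the framework (my own illustration): the 1-nearest-neighbor rule mentioned as a special case arises from the Lipschitz decision function f(x) = d(x, S-) - d(x, S+), the difference of distances to the two classes, whose sign reproduces the 1-NN label:

```python
import numpy as np

def lipschitz_decision(x, X_pos, X_neg):
    """f(x) = d(x, S-) - d(x, S+): distance to the negative training set
    minus distance to the positive one. f is 2-Lipschitz (the inverse
    Lipschitz constant playing the role of the margin), and sign(f) is
    exactly the 1-nearest-neighbor classifier."""
    d_pos = min(np.linalg.norm(x - p) for p in X_pos)
    d_neg = min(np.linalg.norm(x - q) for q in X_neg)
    return d_neg - d_pos

X_pos = np.array([[1.0, 1.0], [2.0, 1.5]])
X_neg = np.array([[-1.0, -1.0], [-2.0, -0.5]])
print(lipschitz_decision(np.array([1.5, 1.0]), X_pos, X_neg))   # positive: class +1
print(lipschitz_decision(np.array([-1.2, -0.8]), X_pos, X_neg)) # negative: class -1
```

Because each distance-to-set function is 1-Lipschitz in any metric, this decision function needs no vector-space structure at all, which is the point of the metric-space framework.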

PDF PostScript [BibTex]

An Introduction to Support Vector Machines

Schölkopf, B.

In Recent Advances and Trends in Nonparametric Statistics, pages: 3-17, (Editors: MG Akritas and DN Politis), Elsevier, Amsterdam, The Netherlands, 2003 (inbook)

Web DOI [BibTex]

Statistical Learning and Kernel Methods in Bioinformatics

Schölkopf, B., Guyon, I., Weston, J.

In Artificial Intelligence and Heuristic Methods in Bioinformatics, 183, pages: 1-21, 3, (Editors: P Frasconi and R Shamir), IOS Press, Amsterdam, The Netherlands, 2003 (inbook)

[BibTex]

Interactive Images

Toyama, K., Schölkopf, B.

(MSR-TR-2003-64), Microsoft Research, Cambridge, UK, 2003 (techreport)

Abstract
Interactive Images are a natural extension of three recent developments: digital photography, interactive web pages, and browsable video. An interactive image is a multi-dimensional image, displayed two dimensions at a time (like a standard digital image), but with which a user can interact to browse through the other dimensions. One might consider a standard video sequence viewed with a video player as a simple interactive image with time as the third dimension. Interactive images are a generalization of this idea, in which the third (and greater) dimensions may be focus, exposure, white balance, saturation, and other parameters. Interaction is handled via a variety of modes including those we call ordinal, pixel-indexed, cumulative, and comprehensive. Through exploration of three novel forms of interactive images based on color, exposure, and focus, we will demonstrate the compelling nature of interactive images.

Web [BibTex]

Semi-Supervised Learning through Principal Directions Estimation

Chapelle, O., Schölkopf, B., Weston, J.

In ICML Workshop: The Continuum from Labeled to Unlabeled Data in Machine Learning & Data Mining, pages: 7, 2003 (inproceedings)

Abstract
We describe methods for taking into account unlabeled data in the training of a kernel-based classifier, such as a Support Vector Machine (SVM). We propose two approaches utilizing unlabeled points in the vicinity of labeled ones. Both approaches effectively modify the metric of the pattern space, either by using non-spherical Gaussian density estimates which are determined using EM, or by modifying the kernel function using displacement vectors computed from pairs of unlabeled and labeled points. The latter is linked to techniques for training invariant SVMs. We present experimental results indicating that the proposed technique can lead to substantial improvements of classification accuracy.

PostScript [BibTex]

Statistical Learning and Kernel Methods

Navia-Vázquez, A., Schölkopf, B.

In Adaptivity and Learning—An Interdisciplinary Debate, pages: 161-186, (Editors: R Kühn and R Menzel and W Menzel and U Ratsch and MM Richter and I-O Stamatescu), Springer, Berlin, Heidelberg, Germany, 2003 (inbook)

[BibTex]

A Short Introduction to Learning with Kernels

Schölkopf, B., Smola, A.

In Proceedings of the Machine Learning Summer School, Lecture Notes in Artificial Intelligence, Vol. 2600, pages: 41-64, (Editors: S Mendelson and AJ Smola), Springer, Berlin, Heidelberg, Germany, 2003 (inbook)

[BibTex]

Bayesian Kernel Methods

Smola, A., Schölkopf, B.

In Advanced Lectures on Machine Learning, Machine Learning Summer School 2002, Lecture Notes in Computer Science, Vol. 2600, pages: 65-117, (Editors: S Mendelson and AJ Smola), Springer, Berlin, Germany, 2003 (inbook)

DOI [BibTex]

Gene expression in chondrocytes assessed with use of microarrays

Aigner, T., Zien, A., Hanisch, D., Zimmer, R.

Journal of Bone and Joint Surgery, 85(Suppl 2):117-123, 2003 (article)

[BibTex]

Machine Learning with Hyperkernels

Ong, CS., Smola, AJ.

In pages: 568-575, 2003 (inproceedings)

PDF [BibTex]

Gaussian Processes to Speed up Hybrid Monte Carlo for Expensive Bayesian Integrals

Rasmussen, CE.

In Bayesian Statistics 7, pages: 651-659, (Editors: J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West), Bayesian Statistics 7, 2003 (inproceedings)

Abstract
Hybrid Monte Carlo (HMC) is often the method of choice for computing Bayesian integrals that are not analytically tractable. However the success of this method may require a very large number of evaluations of the (un-normalized) posterior and its partial derivatives. In situations where the posterior is computationally costly to evaluate, this may lead to an unacceptable computational load for HMC. I propose to use a Gaussian Process model of the (log of the) posterior for most of the computations required by HMC. Within this scheme only occasional evaluation of the actual posterior is required to guarantee that the samples generated have exactly the desired distribution, even if the GP model is somewhat inaccurate. The method is demonstrated on a 10 dimensional problem, where 200 evaluations suffice for the generation of 100 roughly independent points from the posterior. Thus, the proposed scheme allows Bayesian treatment of models with posteriors that are computationally demanding, such as models involving computer simulation.

PDF PostScript Web [BibTex]

Dimension Reduction Based on Orthogonality — a Decorrelation Method in ICA

Zhang, K., Chan, L.

In Artificial Neural Networks and Neural Information Processing - ICANN/ICONIP 2003, Lecture Notes in Computer Science, Vol. 2714, pages: 132-139, (Editors: O Kaynak and E Alpaydin and E Oja and L Xu), Springer, Berlin, Germany, 2003 (inproceedings)

Web DOI [BibTex]

Stability of ensembles of kernel machines

Elisseeff, A., Pontil, M.

In NATO Science Series III: Computer and Systems Science, Vol. 190, pages: 111-124, (Editors: Suykens, J., G. Horvath, S. Basu, C. Micchelli and J. Vandewalle), IOS Press, Netherlands, 2003 (inbook)

[BibTex]

Models of contrast transfer as a function of presentation time and spatial frequency

Wichmann, F.

2003 (poster)

Abstract
Understanding contrast transduction is essential for understanding spatial vision. Using standard 2AFC contrast discrimination experiments conducted using a carefully calibrated display we previously showed that the shape of the threshold versus (pedestal) contrast (TvC) curve changes with presentation time and the performance level defined as threshold (Wichmann, 1999; Wichmann & Henning, 1999). Additional experiments looked at the change of the TvC curve with spatial frequency (Bird, Henning & Wichmann, 2002), and at how to constrain the parameters of models of contrast processing (Wichmann, 2002). Here I report modelling results both across spatial frequency and presentation time. An extensive model-selection exploration was performed using Bayesian confidence regions for the fitted parameters as well as cross-validation methods.

Bird, C.M., G.B. Henning and F.A. Wichmann (2002). Contrast discrimination with sinusoidal gratings of different spatial frequency. Journal of the Optical Society of America A, 19, 1267-1273.
Wichmann, F.A. (1999). Some aspects of modelling human spatial vision: contrast discrimination. Unpublished doctoral dissertation, The University of Oxford.
Wichmann, F.A. & Henning, G.B. (1999). Implications of the Pedestal Effect for Models of Contrast-Processing and Gain-Control. OSA Annual Meeting Program, 62.
Wichmann, F.A. (2002). Modelling Contrast Transfer in Spatial Vision [Abstract]. Journal of Vision, 2, 7a.

[BibTex]

2001


Pattern Selection Using the Bias and Variance of Ensemble

Shin, H., Cho, S.

In Proc. of the Korean Data Mining Conference, pages: 56-67, December 2001 (inproceedings)

[BibTex]

Separation of post-nonlinear mixtures using ACE and temporal decorrelation

Ziehe, A., Kawanabe, M., Harmeling, S., Müller, K.

In Proceedings of the Third International Workshop on Independent Component Analysis and Blind Signal Separation (ICA 2001), pages: 433-438, (Editors: Lee, T.-W., T.P. Jung, S. Makeig, T. J. Sejnowski), December 2001 (inproceedings)

Abstract
We propose an efficient method based on the concept of maximal correlation that reduces the post-nonlinear blind source separation problem (PNL BSS) to a linear BSS problem. For this we apply the Alternating Conditional Expectation (ACE) algorithm – a powerful technique from nonparametric statistics – to approximately invert the (post-)nonlinear functions. Interestingly, in the framework of the ACE method convergence can be proven, and in the PNL BSS scenario the optimal transformation found by ACE will coincide with the desired inverse functions. After the nonlinearities have been removed by ACE, temporal decorrelation (TD) allows us to recover the source signals. Excellent performance underlines the validity of our approach and demonstrates the ACE-TD method on realistic examples.
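The linear temporal-decorrelation stage that follows ACE can be sketched with a one-lag AMUSE-style algorithm (an illustration under the standard linear-BSS assumptions, not the authors' implementation; TDSEP jointly diagonalizes several lags):

```python
import numpy as np

def amuse(x, tau=1):
    """One-lag temporal decorrelation: whiten the mixtures, then
    diagonalize the symmetrized lag-tau covariance of the whitened data.
    Separation works when the sources have distinct autocorrelations."""
    x = x - x.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(x @ x.T / x.shape[1])
    W_white = E @ np.diag(1.0 / np.sqrt(d)) @ E.T   # whitening matrix
    z = W_white @ x
    Ct = z[:, :-tau] @ z[:, tau:].T / (z.shape[1] - tau)
    _, U = np.linalg.eigh((Ct + Ct.T) / 2)          # rotation in whitened space
    return U.T @ W_white                            # unmixing matrix

t = np.arange(5000)
s = np.vstack([np.sin(0.3 * t), np.sin(0.02 * t + 1.0)])  # sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])                    # mixing matrix
W = amuse(A @ s)
print(np.round(W @ A, 2))  # close to a scaled permutation matrix
```

In the ACE-TD pipeline this linear step is applied after ACE has approximately inverted the post-nonlinearities, so the remaining problem is an ordinary linear mixture.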

PDF [BibTex]

Perception of Planar Shapes in Depth

Wichmann, F., Willems, B., Rosas, P., Wagemans, J.

Journal of Vision, 1(3):176, First Annual Meeting of the Vision Sciences Society (VSS), December 2001 (poster)

Abstract
We investigated the influence of the perceived 3D-orientation of planar elliptical shapes on the perception of the shapes themselves. Ellipses were projected onto the surface of a sphere and subjects were asked to indicate if the projected shapes looked as if they were a circle on the surface of the sphere. The image of the sphere was obtained from a real, (near) perfect sphere using a highly accurate digital camera (real sphere diameter 40 cm; camera-to-sphere distance 320 cm; for details see Willems et al., Perception 29, S96, 2000; Photometrics SenSys 400 digital camera with Rodenstock lens, 12-bit linear luminance resolution). Stimuli were presented monocularly on a carefully linearized Sony GDM-F500 monitor keeping the scene geometry as in the real case (sphere diameter on screen 8.2 cm; viewing distance 66 cm). Experiments were run in a darkened room using a viewing tube to minimize, as far as possible, extraneous monocular cues to depth. Three different methods were used to obtain subjects' estimates of 3D-shape: the method of adjustment, temporal 2-alternative forced choice (2AFC) and yes/no. Several results are noteworthy. First, mismatch between perceived and objective slant tended to decrease with increasing objective slant. Second, the variability of the settings, too, decreased with increasing objective slant. Finally, we comment on the results obtained using different psychophysical methods and compare our results to those obtained using a real sphere and binocular vision (Willems et al.).

Web DOI [BibTex]

Anabolic and Catabolic Gene Expression Pattern Analysis in Normal Versus Osteoarthritic Cartilage Using Complementary DNA-Array Technology

Aigner, T., Zien, A., Gehrsitz, A., Gebhard, P., McKenna, L.

Arthritis and Rheumatism, 44(12):2777-2789, December 2001 (article)

Web [BibTex]

Nonlinear blind source separation using kernel feature spaces

Harmeling, S., Ziehe, A., Kawanabe, M., Blankertz, B., Müller, K.

In Proceedings of the Third International Workshop on Independent Component Analysis and Blind Signal Separation (ICA 2001), pages: 102-107, (Editors: Lee, T.-W., T.P. Jung, S. Makeig, T. J. Sejnowski), December 2001 (inproceedings)

Abstract
In this work we propose a kernel-based blind source separation (BSS) algorithm that can perform nonlinear BSS for general invertible nonlinearities. For our kTDSEP algorithm we have to go through four steps: (i) adapting to the intrinsic dimension of the data mapped to feature space F, (ii) finding an orthonormal basis of this submanifold, (iii) mapping the data into the subspace of F spanned by this orthonormal basis, and (iv) applying temporal decorrelation BSS (TDSEP) to the mapped data. After demixing we get a number of irrelevant components and the original sources. To find out which ones are the components of interest, we propose a criterion that allows us to identify the original sources. The excellent performance of kTDSEP is demonstrated in experiments on nonlinearly mixed speech data.

PDF [BibTex]

Pattern Selection for ‘Regression’ using the Bias and Variance of Ensemble Network

Shin, H., Cho, S.

In Proc. of the Korean Institute of Industrial Engineers Conference, pages: 10-19, November 2001 (inproceedings)

[BibTex]

Kernel Methods for Extracting Local Image Semantics

Bradshaw, B., Schölkopf, B., Platt, J.

(MSR-TR-2001-99), Microsoft Research, October 2001 (techreport)

Web [BibTex]

Pattern Selection for ‘Classification’ using the Bias and Variance of Ensemble Neural Network

Shin, H., Cho, S.

In Proc. of the Korea Information Science Conference, pages: 307-309, October 2001, Best Paper Award (inproceedings)

[BibTex]

Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators

Williamson, R., Smola, A., Schölkopf, B.

IEEE Transactions on Information Theory, 47(6):2516-2532, September 2001 (article)

Abstract
We derive new bounds for the generalization error of kernel machines, such as support vector machines and related regularization networks, by obtaining new bounds on their covering numbers. The proofs make use of a viewpoint that is apparently novel in the field of statistical learning theory. The hypothesis class is described in terms of a linear operator mapping from a possibly infinite-dimensional unit ball in feature space into a finite-dimensional space. The covering numbers of the class are then determined via the entropy numbers of the operator. These numbers, which characterize the degree of compactness of the operator, can be bounded in terms of the eigenvalues of an integral operator induced by the kernel function used by the machine. As a consequence, we are able to theoretically explain the effect of the choice of kernel function on the generalization performance of support vector machines.

DOI [BibTex]

Hybrid IDM/Impedance learning in human movements

Burdet, E., Teng, K., Chew, C., Peters, J., , B.

In Proceedings of the 1st International Symposium on Measurement, Analysis and Modeling of Human Functions (ISHF 2001), 1, pages: 1-9, September 2001 (inproceedings)

Abstract
In spite of motor output variability and the delay in the sensori-motor loop, humans routinely perform intrinsically unstable tasks. The hybrid IDM/impedance learning controller presented in this paper enables skilful performance in strong stable and unstable environments. It considers motor output variability identified from experimental data, and contains two modules concurrently learning the endpoint force and impedance adapted to the environment. The simulations suggest how humans learn to skillfully perform intrinsically unstable tasks. Testable predictions are proposed.

PDF Web [BibTex]

Calibration of Digital Amateur Cameras

Urbanek, M., Horaud, R., Sturm, P.

(RR-4214), INRIA Rhone Alpes, Montbonnot, France, July 2001 (techreport)

Web [BibTex]

Combining Off- and On-line Calibration of a Digital Camera

Urbanek, M., Horaud, R., Sturm, P.

In Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling, pages: 99-106, June 2001 (inproceedings)

Abstract
We introduce a novel outlook on the self-calibration task, by considering images taken by a camera in motion, allowing for zooming and focusing. Apart from the complex relationship between the lens control settings and the intrinsic camera parameters, a prior off-line calibration allows us to neglect the setting of focus, and to fix the principal point and aspect ratio throughout distinct views. Thus, the calibration matrix depends only on the zoom position. Given a fully calibrated reference view, one has only one parameter to estimate for any other view of the same scene, in order to calibrate it and to be able to perform metric reconstructions. We provide a closed-form solution, and validate the reliability of the algorithm with experiments on real images. An important advantage of our method is the reduced (to one) number of critical camera configurations associated with it. Moreover, we propose a method for computing the epipolar geometry of two views, taken from different positions and with different (spatial) resolutions; the idea is to take an appropriate third view that is "easy" to match with the other two.

ZIP [BibTex]

Centralization: A new method for the normalization of gene expression data

Zien, A., Aigner, T., Zimmer, R., Lengauer, T.

Bioinformatics, 17, pages: S323-S331, June 2001, Mathematical supplement available at http://citeseer.ist.psu.edu/574280.html (article)

Abstract
Microarrays measure values that are approximately proportional to the numbers of copies of different mRNA molecules in samples. Due to technical difficulties, the constant of proportionality between the measured intensities and the numbers of mRNA copies per cell is unknown and may vary for different arrays. Usually, the data are normalized (i.e., array-wise multiplied by appropriate factors) in order to compensate for this effect and to enable informative comparisons between different experiments. Centralization is a new two-step method for the computation of such normalization factors that is both biologically better motivated and more robust than standard approaches. First, for each pair of arrays the quotient of the constants of proportionality is estimated. Second, from the resulting matrix of pairwise quotients an optimally consistent scaling of the samples is computed.
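The two steps can be sketched directly (a simplification of my own: pairwise quotients estimated by the median of gene-wise ratios, and the consistent scaling obtained by least squares in the log domain; the paper's estimators are more elaborate and more robust):

```python
import numpy as np

def centralize(X):
    """Step 1: pairwise quotients q_ij estimated as the median gene-wise
    ratio between arrays i and j (columns of X). Step 2: least-squares
    consistent scaling in the log domain, log c_i = -mean_j log q_ij."""
    n = X.shape[1]
    logq = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            logq[i, j] = np.median(np.log(X[:, i] / X[:, j]))
    return np.exp(-logq.mean(axis=1))

rng = np.random.default_rng(0)
base = rng.lognormal(2.0, 1.0, size=(2000, 1))    # shared true expression
noise = rng.lognormal(0.0, 0.3, size=(2000, 4))   # per-array measurement noise
scales = np.array([0.5, 1.0, 2.0, 4.0])           # unknown proportionality constants
X = base * noise * scales
c = centralize(X)
v = c * scales   # the factors should undo the unknown scales
print(np.round(v / v.mean(), 2))
```

Working with medians of log-ratios makes the pairwise estimates insensitive to a minority of differentially expressed genes, which is the robustness argument behind centralization.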

PDF PostScript Web [BibTex]

Regularized principal manifolds

Smola, A., Mika, S., Schölkopf, B., Williamson, R.

Journal of Machine Learning Research, 1, pages: 179-209, June 2001 (article)

Abstract
Many settings of unsupervised learning can be viewed as quantization problems - the minimization of the expected quantization error subject to some restrictions. This allows the use of tools such as regularization from the theory of (supervised) risk minimization for unsupervised learning. This setting turns out to be closely related to principal curves, the generative topographic map, and robust coding. We explore this connection in two ways: (1) we propose an algorithm for finding principal manifolds that can be regularized in a variety of ways; and (2) we derive uniform convergence bounds and hence bounds on the learning rates of the algorithm. In particular, we give bounds on the covering numbers which allows us to obtain nearly optimal learning rates for certain types of regularization operators. Experimental results demonstrate the feasibility of the approach.
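The quantization view can be made concrete with the simplest special case the paper generalizes, vector quantization by Lloyd's k-means algorithm (my own illustration; the paper's method replaces the discrete codebook with a regularized continuous mapping):

```python
import numpy as np

def kmeans_quantization(X, k=3, iters=20, seed=0):
    """Lloyd's algorithm: assign each point to its nearest codebook vector,
    then re-estimate each codebook vector as its cluster mean. Each sweep
    cannot increase the quantization error sum_i min_j ||x_i - c_j||^2."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    errs = []
    for _ in range(iters):
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(1)
        errs.append(d2.min(1).sum())
        C = np.array([X[assign == j].mean(0) if (assign == j).any() else C[j]
                      for j in range(k)])
    return C, errs

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, (100, 2)) for m in (-2.0, 0.0, 2.0)])
C, errs = kmeans_quantization(X)
print(errs[0], errs[-1])  # quantization error is non-increasing over sweeps
```

Minimizing this empirical quantization error without any restriction overfits as the codebook grows, which is why the paper regularizes the quantizer and studies uniform convergence of the resulting algorithm.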

PDF [BibTex]

Variationsverfahren zur Untersuchung von Grundzustandseigenschaften des Ein-Band Hubbard-Modells [Variational Methods for Investigating Ground-State Properties of the One-Band Hubbard Model]

Eichhorn, J.

Biologische Kybernetik, Technische Universität Dresden, Dresden/Germany, May 2001 (diplomathesis)

Abstract
Using different modifications of a new variational approach, static ground-state properties of the one-band Hubbard model, such as the energy and the staggered magnetisation, are calculated. By taking into account additional fluctuations, the method is gradually improved, so that a very good description of the energy in one and two dimensions can be achieved. After a detailed discussion of the application in one dimension, extensions to two dimensions are introduced. Using a modified version of the variational ansatz, it should in particular be possible to describe the quantum phase transition in the magnetisation.

PostScript [BibTex]
