
Efficient Large Scale Linear Programming Support Vector Machines


Conference Paper


This paper presents a decomposition method for efficiently constructing ℓ1-norm Support Vector Machines (SVMs). The decomposition algorithm introduced in this paper possesses many desirable properties. For example, it is provably convergent, scales well to large datasets, is easy to implement, and can be extended to handle support vector regression and other SVM variants. We demonstrate the efficiency of our algorithm by training on (dense) synthetic datasets of sizes up to 20 million points (in ℝ32). The results show our algorithm to be several orders of magnitude faster than a previously published method for the same task. We also present experimental results on real data sets—our method is seen to be not only very fast, but also highly competitive against the leading SVM implementations.
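The ℓ1-norm SVM referenced in the abstract is the standard linear-programming variant: minimize ‖w‖₁ + C·Σᵢξᵢ subject to yᵢ(w·xᵢ + b) ≥ 1 − ξᵢ, ξ ≥ 0, made linear by splitting w = u − v with u, v ≥ 0. As a rough illustration of that baseline formulation (not the paper's decomposition algorithm), the following sketch solves the full LP directly with `scipy.optimize.linprog` on a small synthetic dataset; the data and constant `C = 1.0` are hypothetical choices for the example:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
# Toy two-class Gaussian data (hypothetical; not the paper's datasets)
n, d, C = 40, 2, 1.0
X = np.vstack([rng.normal(+1.5, 0.5, (n // 2, d)),
               rng.normal(-1.5, 0.5, (n // 2, d))])
y = np.hstack([np.ones(n // 2), -np.ones(n // 2)])

# Variable vector z = [u (d), v (d), b (1), xi (n)], with w = u - v and
# u, v >= 0, so sum(u) + sum(v) equals ||w||_1 at the optimum.
c = np.hstack([np.ones(2 * d), [0.0], C * np.ones(n)])

# Margin constraints y_i (w . x_i + b) >= 1 - xi_i, rewritten as A_ub z <= b_ub
Yx = y[:, None] * X
A_ub = np.hstack([-Yx, Yx, -y[:, None], -np.eye(n)])
b_ub = -np.ones(n)

bounds = [(0, None)] * (2 * d) + [(None, None)] + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
w = res.x[:d] - res.x[d:2 * d]
b = res.x[2 * d]
print("status:", res.status, "w:", w, "b:", b)
```

Solving the full LP like this is only feasible for small problems; the paper's contribution is a decomposition scheme that avoids handing the entire constraint matrix to a solver, which is what makes the reported 20-million-point runs possible.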

Author(s): Sra, S.
Book Title: ECML 2006
Journal: Machine Learning: ECML 2006
Pages: 767-774
Year: 2006
Month: September
Editors: Fürnkranz, J., Scheffer, T., Spiliopoulou, M.
Publisher: Springer

Department(s): Empirical Inference
Bibtex Type: Conference Paper (inproceedings)

DOI: 10.1007/11871842_78
Event Name: 17th European Conference on Machine Learning
Event Place: Berlin, Germany

Address: Berlin, Germany
Language: en
Organization: Max-Planck-Gesellschaft
School: Biologische Kybernetik



@inproceedings{sra2006efficient,
  title = {Efficient Large Scale Linear Programming Support Vector Machines},
  author = {Sra, S.},
  journal = {Machine Learning: ECML 2006},
  booktitle = {ECML 2006},
  pages = {767--774},
  editor = {F{\"u}rnkranz, J. and Scheffer, T. and Spiliopoulou, M.},
  publisher = {Springer},
  organization = {Max-Planck-Gesellschaft},
  school = {Biologische Kybernetik},
  address = {Berlin, Germany},
  month = sep,
  year = {2006},
}