Memory-efficient large-scale linear support vector machine
Published: 14 February 2015
Proceedings Volume 9445, Seventh International Conference on Machine Vision (ICMV 2014); 944527 (2015) https://doi.org/10.1117/12.2180925
Event: Seventh International Conference on Machine Vision (ICMV 2014), 2014, Milan, Italy
Stochastic gradient descent has been advanced as a computationally efficient method for large-scale problems. In classification problems, many proposed linear support vector machines are very effective. However, they assume that the data is already in memory, which might not always be the case. Recent work suggests a classical method that divides such a problem into smaller blocks and then solves the sub-problems iteratively. We show that a simple modification, shrinking the dataset early, produces significant savings in computation and memory. We further find that on problems larger than previously considered, our approach is able to reach solutions on top-end desktop machines while competing methods cannot.
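The idea sketched in the abstract, splitting data into memory-sized blocks, updating a linear SVM with stochastic gradient steps, and shrinking away confidently classified examples, can be illustrated roughly as follows. This is a hypothetical sketch, not the authors' implementation: the Pegasos-style step size, the `shrink_margin` threshold, and the choice to start shrinking only after the first pass are all assumptions made for illustration.

```python
import numpy as np

def train_blockwise_svm(blocks, dim, lam=0.01, epochs=5, shrink_margin=2.0):
    """Hypothetical sketch of block-wise linear SVM training with shrinking.

    `blocks` is a list of (X, y) pairs, each assumed small enough to fit in
    memory; labels are +1/-1. After the first pass, examples whose margin
    exceeds `shrink_margin` are dropped from later passes, reducing both
    computation and the memory needed for subsequent epochs.
    """
    w = np.zeros(dim)
    t = 1
    for epoch in range(epochs):
        new_blocks = []
        for X, y in blocks:
            keep = np.ones(len(y), dtype=bool)
            for i in range(len(y)):
                eta = 1.0 / (lam * t)          # Pegasos-style step size (assumed)
                margin = y[i] * X[i].dot(w)
                if margin < 1.0:               # hinge loss active: full update
                    w = (1 - eta * lam) * w + eta * y[i] * X[i]
                else:                          # only regularization shrinkage
                    w = (1 - eta * lam) * w
                if epoch > 0 and margin > shrink_margin:
                    keep[i] = False            # easy example: shrink it away
                t += 1
            new_blocks.append((X[keep], y[keep]))
        blocks = new_blocks                    # later epochs see less data
    return w
```

In an out-of-core setting each block would be loaded from disk on demand rather than held in a list; the shrinking step then translates into rewriting smaller blocks back to disk after each pass.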
© (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Abdullah Alrajeh, Akiko Takeda, Mahesan Niranjan, "Memory-efficient large-scale linear support vector machine", Proc. SPIE 9445, Seventh International Conference on Machine Vision (ICMV 2014), 944527 (14 February 2015); https://doi.org/10.1117/12.2180925

