A real-time GPU implementation of the SIFT algorithm for large-scale video analysis tasks (27 February 2015)
Abstract
The SIFT algorithm is one of the most popular feature extraction methods and is therefore widely used in all sorts of video analysis tasks, such as instance search and duplicate/near-duplicate detection. We present an efficient GPU implementation of the SIFT descriptor extraction algorithm using CUDA. We present the major steps of the algorithm and, for each step, describe how to massively parallelize it efficiently, how to take advantage of unique GPU capabilities such as shared memory and texture memory, and how to avoid or minimize common GPU performance pitfalls. We compare the GPU implementation with the reference CPU implementation in terms of runtime and quality, achieving a speedup factor of approximately 3–5 for SD and 5–6 for Full HD video with respect to a multi-threaded CPU implementation, which allows us to run the SIFT descriptor extraction algorithm in real time on SD video. Furthermore, quality tests show that the GPU implementation yields the same quality as the reference CPU implementation from the HessSIFT library. We further describe the benefits of GPU-accelerated SIFT descriptor calculation for video analysis applications such as near-duplicate video detection.
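To illustrate the kind of shared-memory optimization the abstract refers to, the sketch below shows a tiled horizontal Gaussian-blur kernel of the sort used when building SIFT's Gaussian scale-space on the GPU. This is an illustrative sketch under assumed parameters (filter radius, tile width, kernel and variable names), not the authors' actual implementation:

```cuda
// Sketch: shared-memory tiled horizontal Gaussian blur, a typical building
// block of GPU scale-space construction for SIFT. RADIUS, TILE_W and all
// identifiers are illustrative assumptions, not taken from the paper.
#include <cuda_runtime.h>

#define RADIUS 4    // assumed filter radius
#define TILE_W 256  // assumed threads per block

// Filter taps in constant memory: cached and broadcast to all threads.
__constant__ float d_gaussKernel[2 * RADIUS + 1];

__global__ void gaussBlurRowKernel(const float* in, float* out,
                                   int width, int height)
{
    // Staging buffer with halo cells on both sides, so each input pixel
    // is read from global memory only once per block.
    __shared__ float tile[TILE_W + 2 * RADIUS];

    int row = blockIdx.y;
    int col = blockIdx.x * TILE_W + threadIdx.x;

    // Load the main tile (clamped at the image border).
    int c = min(max(col, 0), width - 1);
    tile[threadIdx.x + RADIUS] = in[row * width + c];

    // The first RADIUS threads also load the left and right halo cells.
    if (threadIdx.x < RADIUS) {
        int cl = max(col - RADIUS, 0);
        int cr = min(col + TILE_W, width - 1);
        tile[threadIdx.x] = in[row * width + cl];
        tile[threadIdx.x + TILE_W + RADIUS] = in[row * width + cr];
    }
    __syncthreads();

    // Convolve entirely out of shared memory.
    if (col < width) {
        float sum = 0.0f;
        for (int k = -RADIUS; k <= RADIUS; ++k)
            sum += d_gaussKernel[k + RADIUS] * tile[threadIdx.x + RADIUS + k];
        out[row * width + col] = sum;
    }
}
```

A host launch would use a grid of ((width + TILE_W - 1) / TILE_W, height) blocks of TILE_W threads each; the separable vertical pass would follow the same pattern with the roles of rows and columns swapped.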
© (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Hannes Fassold, Jakub Rosner, "A real-time GPU implementation of the SIFT algorithm for large-scale video analysis tasks", Proc. SPIE 9400, Real-Time Image and Video Processing 2015, 940007 (27 February 2015); https://doi.org/10.1117/12.2083201
Proceedings, 8 pages