CULA: hybrid GPU accelerated linear algebra routines (26 April 2010)
The modern graphics processing unit (GPU) found in many standard personal computers is a highly parallel math processor capable of nearly 1 TFLOPS peak throughput at a cost similar to a high-end CPU, with an excellent FLOPS/watt ratio. High-level linear algebra operations are computationally intensive, often requiring O(N³) operations, and would seem a natural fit for the processing power of the GPU. Our work is on CULA, a GPU accelerated implementation of linear algebra routines. We present results from factorizations such as LU decomposition, singular value decomposition, and QR decomposition, along with applications like system solution and least squares. The GPU execution model featured by NVIDIA GPUs based on CUDA demands very strong parallelism, requiring between hundreds and thousands of simultaneous operations to achieve high performance. Some constructs from linear algebra map extremely well to the GPU, while others map poorly. CPUs, on the other hand, do well at smaller-order parallelism and perform acceptably during low-parallelism code segments. Our work addresses this via a hybrid processing model, in which the CPU and GPU work simultaneously to produce results. In many cases, this is accomplished by allowing each platform to do the work it performs most naturally.
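To make the O(N³) cost concrete, the following is a minimal sketch of the kind of operation CULA accelerates: LU decomposition with partial pivoting followed by back substitution to solve a linear system. This is plain Python for illustration only, not the CULA API; the three nested loops of the elimination phase are the O(N³) work that a GPU implementation parallelizes.

```python
# Minimal LU-style Gaussian elimination with partial pivoting, solving
# Ax = b. The triple-nested elimination loop is the O(N^3) kernel that
# GPU linear algebra libraries such as CULA offload to the device.
def lu_solve(A, b):
    n = len(A)
    A = [row[:] for row in A]  # work on copies
    b = b[:]
    for k in range(n):
        # Partial pivoting: bring the largest remaining pivot to row k.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        # Eliminate entries below the pivot.
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# Solve a small 2x2 system: 2x + y = 3, x + 3y = 5.
print(lu_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))
```

On the CPU this scalar loop nest is sequential; the hybrid model described above assigns the large, data-parallel trailing-matrix updates to the GPU while the CPU handles the small, low-parallelism panel factorizations.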
© (2010) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
John R. Humphrey, Daniel K. Price, Kyle E. Spagnoli, Aaron L. Paolini, Eric J. Kelmelis, "CULA: hybrid GPU accelerated linear algebra routines", Proc. SPIE 7705, Modeling and Simulation for Defense Systems and Applications V, 770502 (26 April 2010); https://doi.org/10.1117/12.850538