We propose an efficient, massively parallel implementation of spectral-spatial classification of hyperspectral images on commodity graphics processing units (GPUs). The spectral-spatial classification framework is based on the marginal probability distribution, which uses all of the information in the hyperspectral data. In this framework, first, the posterior class probability is modeled with a discriminative random field in which the association potential is linked to a multinomial logistic regression (MLR) classifier and the interaction potential, which models the spatial information, is linked to a Markov random field multilevel logistic (MLL) prior. Second, the maximizers of the posterior marginals are computed via the loopy belief propagation (LBP) method. In addition, the regressors of the MLR classifier are inferred by the logistic regression via variable splitting and augmented Lagrangian (LORSAL) algorithm. Although this spectral-spatial classification framework achieves state-of-the-art accuracy compared with similar approaches, its computational complexity is very high. We exploit the massively parallel computing capability of an NVIDIA Tesla C2075 GPU through the compute unified device architecture (CUDA), together with a set of GPU-accelerated linear algebra libraries (CULA), to dramatically improve the computation speed of this hyperspectral image classification framework. Shared memory and asynchronous transfer techniques are also used for further optimization. Real hyperspectral data sets collected by the National Aeronautics and Space Administration's Airborne Visible/Infrared Imaging Spectrometer and the Reflective Optics System Imaging Spectrometer are used to evaluate effectiveness.
The results show speedups of 92-fold on LORSAL, 69-fold on MLR, 127-fold on MLL, 160-fold on LBP, and 73-fold on the whole spectral-spatial classification framework, as compared with a single-core central processing unit counterpart.
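For concreteness, the probabilistic model summarized above can be sketched as follows; the notation here is illustrative (chosen by us, not fixed by the abstract), following the standard discriminative random field formulation. With labels $\mathbf{y}=(y_1,\dots,y_n)$, spectral vectors $\mathbf{x}=(\mathbf{x}_1,\dots,\mathbf{x}_n)$, and neighborhood system $\mathcal{E}$, the posterior combines the MLR association potential with the MLL pairwise interaction prior:
\[
p(\mathbf{y}\mid\mathbf{x}) \;\propto\; \prod_{i} p(y_i\mid\mathbf{x}_i)\prod_{(i,j)\in\mathcal{E}} e^{\mu\,\delta(y_i=y_j)},
\qquad
p(y_i=k\mid\mathbf{x}_i)=\frac{\exp\!\big(\boldsymbol{\omega}_k^{\top}\mathbf{h}(\mathbf{x}_i)\big)}{\sum_{k'}\exp\!\big(\boldsymbol{\omega}_{k'}^{\top}\mathbf{h}(\mathbf{x}_i)\big)},
\]
where $\mathbf{h}(\cdot)$ is a feature map, the regressors $\boldsymbol{\omega}_k$ are learned by LORSAL, and $\mu>0$ controls the strength of the MLL spatial smoothness term. LBP then approximates the marginals $p(y_i\mid\mathbf{x})$, from which the maximizer-of-posterior-marginals labeling is read off as $\hat{y}_i=\arg\max_k\, p(y_i=k\mid\mathbf{x})$.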