In recent years, Generative Adversarial Networks (GANs) have received much attention in the field of machine learning. A GAN is an unsupervised learning model widely used for image, video, and voice data. Building on GAN's two-player zero-sum game formulation, researchers have proposed strong variants such as Deep Convolutional GAN (DCGAN), Conditional GAN (CGAN), Least Squares GAN (LSGAN), and Boundary Equilibrium GAN (BEGAN), which have gradually alleviated the problems of training imbalance and mode collapse. However, the time efficiency of model training remains a challenging problem. This paper proposes a GAN training algorithm based on GPU parallel acceleration, which exploits the powerful computing capability and massive parallelism of GPUs to greatly reduce model training time, improve the training efficiency of the GAN model, and achieve better modeling performance. Finally, we evaluate the proposed algorithm on the LSUN public scene dataset and the TIMIT public speech dataset, comparing it with the traditional GAN, DCGAN, LSGAN, and BEGAN algorithms. The experiments fully demonstrate the training-time advantage of the proposed algorithm.
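As background for the two-player zero-sum game mentioned above, the standard GAN objective (from Goodfellow et al.'s original formulation, not a result of this paper) pits a generator \(G\) against a discriminator \(D\) in the minimax game:

```latex
\min_{G} \max_{D} \; V(D, G)
  = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[ \log D(x) \right]
  + \mathbb{E}_{z \sim p_{z}(z)}\!\left[ \log \bigl(1 - D(G(z))\bigr) \right]
```

Here \(p_{\mathrm{data}}\) is the real-data distribution and \(p_z\) is the noise prior fed to the generator; the variants cited above (LSGAN, BEGAN, etc.) modify this loss to stabilize training.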