Computer-generated holography is a computationally intensive process particularly well suited to the architecture of graphics processing units (GPUs). This work investigates the performance improvements achievable by using a GPU to optimize holograms via simulated annealing. Two examples are given: accelerated training of an optical correlator to accept or reject inputs over sets of varying sizes, followed by an investigation into the optimization of a hologram to produce a desired complex distribution in a portion of the far field at varying resolutions. Specifically, results comparing a quad-core 3.0-GHz CPU and an NVIDIA GTX 260 GPU are presented, demonstrating performance improvements of up to 2400%. This work details the steps taken to optimize the algorithm for both the CPU and GPU platforms, and may be of interest to those looking to utilize GPU hardware for scientific computation.
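To make the optimization concrete, the following is a minimal sketch of simulated annealing applied to a phase-only hologram, not the paper's implementation. The problem size (64x64), the FFT far-field model, the target amplitude window, the single-pixel perturbation move, and the geometric cooling schedule are all assumptions chosen for illustration.

```python
# Hedged sketch: simulated annealing of a phase-only hologram.
# Assumptions (not from the paper): 64x64 hologram, far field modelled by a
# 2D FFT, cost = squared error against a target amplitude in a small window,
# one random pixel perturbed per iteration, Metropolis acceptance rule.
import numpy as np

rng = np.random.default_rng(0)
N = 64
phase = rng.uniform(0, 2 * np.pi, (N, N))   # initial random phase hologram
target = np.zeros((N, N))
target[28:36, 28:36] = 1.0                  # desired amplitude window (assumed)

def cost(ph):
    far = np.fft.fft2(np.exp(1j * ph)) / N  # scalar far-field model via FFT
    return np.sum((np.abs(far) - target) ** 2)

T, alpha = 1.0, 0.999                       # assumed annealing schedule
c0 = cost(phase)                            # cost of the initial hologram
c = c0
for _ in range(5000):
    i, j = rng.integers(0, N, 2)
    old = phase[i, j]
    phase[i, j] = rng.uniform(0, 2 * np.pi) # trial perturbation of one pixel
    c_new = cost(phase)
    # Metropolis criterion: accept improvements, occasionally accept worse
    if c_new < c or rng.random() < np.exp((c - c_new) / T):
        c = c_new
    else:
        phase[i, j] = old                   # reject: restore the pixel
    T *= alpha                              # cool the temperature

print(f"cost: {c0:.1f} -> {c:.1f}")
```

On a GPU, the per-iteration cost evaluation (the FFT and the error reduction) is the part that parallelizes well, which is where the speedups reported above come from; the acceptance logic itself is inherently serial.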