Previous work produced a parallel and moderately scalable wavefront sensor model as part of a larger integrated telescope model. This relied on traditional high-performance computing (HPC) techniques, using optimised C and MPI-based parallelism to marry maximum performance with the productive high-level modelling environment of MATLAB. In the intervening period, the computational power and flexibility offered by graphics processing units (GPUs) have increased dramatically. This presents new options in terms of the level of hardware required to perform simulations, as well as new capabilities in terms of the scope of such simulations. We present a discussion of the currently available approaches, together with test-case performance results based on a port to a GPU platform.