Coherent integration is central to extracting the maximum signal-to-noise ratio (SNR) from optical interferometric data, and post-processing is the most effective way to perform it. More sophisticated algorithms produce better results but also consume more computing time, sometimes several minutes of computation per second of observation. As data volumes continue to grow, transferring the data to a supercomputer is becoming impractical.
To address this problem, we have explored using a General-Purpose Graphics Processing Unit (GPGPU) to perform these computations on a local machine, exploiting the fact that the problem is, in principle, massively parallel.
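To make the parallelism concrete, consider a coarse search for the group delay that maximizes the coherently integrated fringe power: every trial delay can be evaluated independently, so the search maps naturally onto GPU threads. The following sketch is illustrative only and is not the implementation described in this paper; the simple fringe model, the array shapes, and the use of the CuPy library for GPU offload are all assumptions.

# Illustrative sketch: a massively parallel delay search on a GPU via CuPy.
# The fringe model and parameter names are assumptions, not this paper's
# actual algorithm.
import cupy as cp

def delay_search(visibilities, wavenumbers, trial_delays):
    """Coherently integrate fringe data for every trial delay in parallel.

    visibilities : complex array, shape (n_frames, n_channels)
    wavenumbers  : real array, shape (n_channels,), rad/m
    trial_delays : real array, shape (n_delays,), m
    """
    v = cp.asarray(visibilities)
    k = cp.asarray(wavenumbers)
    d = cp.asarray(trial_delays)

    # One phase rotation per (trial delay, spectral channel) pair.
    phasors = cp.exp(-1j * d[:, None] * k[None, :])   # (n_delays, n_channels)

    # Coherent sum over frames, then over channels. Each trial delay is an
    # independent computation, so the whole grid evaluates in parallel.
    spectrum = v.sum(axis=0)                          # (n_channels,)
    power = cp.abs((phasors * spectrum).sum(axis=1)) ** 2

    best = int(cp.argmax(power))
    return float(d[best]), cp.asnumpy(power)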
In this paper, we discuss methods for optimizing the fringe-tracking algorithm. In particular, we focus on the parameter-extraction step, describing implementations that use both genetic algorithms and Powell's method. Using these methods, we improved performance by a factor of 100.
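As an indication of what the parameter-extraction step can look like, the sketch below fits a two-parameter fringe model (group delay and amplitude) with SciPy's Powell minimizer. It is a CPU illustration under assumed data and model, not the GPU implementations of the genetic algorithm or Powell's method discussed in this paper.

# Illustrative sketch: parameter extraction with Powell's method.
# The two-parameter fringe model and the synthetic data are assumptions.
import numpy as np
from scipy.optimize import minimize

def fringe_residual(params, visibilities, wavenumbers_um):
    """Squared residual between the data and a simple fringe model.

    The delay is expressed in microns so both parameters are of order
    unity, which keeps Powell's unit-scale line searches well behaved.
    """
    delay_um, amplitude = params
    model = amplitude * np.exp(-1j * delay_um * wavenumbers_um)
    return np.sum(np.abs(visibilities - model) ** 2)

# Synthetic single-baseline data (arbitrary units); in practice the
# initial guess would come from a coarse delay search.
rng = np.random.default_rng(0)
wavenumbers_um = np.linspace(8.0, 9.0, 64)     # rad/micron, assumed band
true_delay_um, true_amp = 2.5, 1.0
visibilities = true_amp * np.exp(-1j * true_delay_um * wavenumbers_um)
visibilities = visibilities + 0.1 * (rng.standard_normal(64)
                                     + 1j * rng.standard_normal(64))

result = minimize(
    fringe_residual,
    x0=[2.4, 0.8],                             # coarse initial guess
    args=(visibilities, wavenumbers_um),
    method="Powell",
)
print("estimated delay (um):", result.x[0], "amplitude:", result.x[1])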