Long-range video surveillance is usually limited by the wavefront aberrations caused by atmospheric turbulence, rather than by the quality of the imaging optics or sensor. These aberrations can be mitigated optically by adaptive optics, or corrected post-detection by digital video processing. Video processing is preferred if the quality of the enhancement is acceptable, because the hardware is less expensive and has lower size, weight, and power (SWaP). Several competing video processing solutions may be employed: speckle imaging with bispectrum processing, lucky imaging, geometric correction, and blind deconvolution. Speckle imaging was originally developed for astronomy. It has subsequently been adapted for the more challenging problem of low-altitude, slant-path imaging, where the atmosphere is denser and more turbulent. This paper considers a bispectrum-based video processing solution, called ATCOM, which was originally implemented on an i7 CPU and subsequently accelerated on a GPU by EM Photonics Ltd. The design has since been adapted in a joint venture with RFEL Ltd to produce a low-SWaP implementation based around Xilinx's Zynq 7045 all-programmable system-on-a-chip (SoC). This system is called ATACAMA. Bispectrum processing is computationally expensive and, for both ATCOM and ATACAMA, a sub-region of the image must be processed to achieve operation at standard video frame rates. This paper considers how the design may be optimized to increase the size of this region, while maintaining high performance. Finally, use of Xilinx's next-generation UltraScale+ multiprocessor SoC (MPSoC), which has an embedded Mali-400 GPU as well as an ARM CPU, is explored to further improve functionality.
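To illustrate why bispectrum processing is computationally expensive, the sketch below computes the bispectrum of a 1-D signal, B(u, v) = X(u) X(v) X*(u + v), where X is the signal's discrete Fourier transform. This is a minimal, generic illustration (not the ATCOM or ATACAMA implementation, whose details are not given here): for a length-n signal the bispectrum is already an n × n array, and for an n × n image it is four-dimensional, which is why a sub-region must be processed to sustain video frame rates. A key property exploited by speckle imaging is that the bispectrum phase is invariant to translation, so averaging bispectra over many short-exposure frames suppresses the random turbulence-induced tilt while preserving object phase information.

```python
import numpy as np

def bispectrum_1d(x):
    """Bispectrum B(u, v) = X(u) * X(v) * conj(X(u + v)) of a 1-D signal.

    Illustrative only: real speckle-imaging pipelines average this quantity
    over many frames and recover object phase from it recursively.
    """
    X = np.fft.fft(x)
    n = len(X)
    u = np.arange(n)
    # Outer product X(u) X(v), times the conjugate at the summed
    # frequency (u + v), taken modulo n for the discrete transform.
    return np.outer(X, X) * np.conj(X[(u[:, None] + u[None, :]) % n])

# A delta function has a flat spectrum, so its bispectrum is identically 1;
# shifting the delta leaves the bispectrum unchanged (translation invariance).
```

The O(n^2) storage for a 1-D signal (O(n^4) for a 2-D image, before subplane reduction) is the cost that motivates restricting processing to a sub-region of the frame.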