31 May 2013 Hardware acceleration of lucky-region fusion (LRF) algorithm for image acquisition and processing
“Lucky-region fusion” (LRF) is an image processing technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm extracts sharp regions from each image in a series of short-exposure frames and “fuses” them into a final image of improved quality. In previous research, the LRF algorithm was implemented on a PC using a compiled programming language. However, a PC usually lacks the processing power to handle the real-time extraction, processing, and reduction required when the LRF algorithm is applied not to single still images but to real-time video from fast, high-resolution image sensors. This paper describes a hardware implementation of the LRF algorithm on a Virtex 6 field programmable gate array (FPGA) to achieve real-time video processing. The novelty of our approach is the creation of a “black box” LRF video processing system with a standard camera link input, a user controller interface, and a standard camera link output.
© 2013 Society of Photo-Optical Instrumentation Engineers (SPIE).
William Maignan, David Koeplinger, Gary W. Carhart, Mathieu Aubailly, Fouad Kiamilev, and J. Jiang Liu, "Hardware acceleration of lucky-region fusion (LRF) algorithm for image acquisition and processing", Proc. SPIE 8720, Photonic Applications for Aerospace, Commercial, and Harsh Environments IV, 87200B (31 May 2013).
