Hardware acceleration of lucky-region fusion (LRF) algorithm for imaging
24 June 2014
Abstract
“Lucky-region” fusion (LRF) is a synthetic imaging technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm extracts sharp regions from a series of short-exposure frames and fuses them into a final, improved image. In our previous research, the LRF algorithm was implemented on a PC using the C programming language. However, the PC did not have sufficient processing power to handle the real-time extraction, processing, and reduction required when the LRF algorithm was applied to real-time video from fast, high-resolution image sensors rather than single still images. This document describes a hardware implementation of the LRF algorithm on a Virtex-7 field-programmable gate array (FPGA) to achieve real-time image processing. The novelty of our approach is the creation of a “black box” LRF video processing system with a generic Camera Link input, a user control interface, and a Camera Link or DVI video output. We also describe a custom hardware simulation environment we built to test our LRF implementation.
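A minimal sketch of the extract-and-fuse idea described above, written in C (the language of the authors' earlier PC implementation). The sharpness metric (local gradient energy) and the keep-the-sharpest-pixel fusion rule are illustrative assumptions on our part, not the published formulation, and the names lrf_fuse and sharpness are hypothetical.

/* Illustrative sketch of lucky-region fusion; the gradient-energy
 * sharpness metric and per-pixel fusion rule are assumptions, not
 * the authors' exact algorithm. */

/* Local gradient energy at pixel (x, y) of a w-by-h grayscale frame. */
static double sharpness(const unsigned char *img, int w, int h, int x, int y)
{
    if (x <= 0 || y <= 0 || x >= w - 1 || y >= h - 1)
        return 0.0; /* border pixels contribute nothing */
    double gx = img[y * w + (x + 1)] - img[y * w + (x - 1)];
    double gy = img[(y + 1) * w + x] - img[(y - 1) * w + x];
    return gx * gx + gy * gy;
}

/* Fuse one short-exposure frame into the running fused image: wherever
 * the incoming frame is locally sharper than anything seen so far, its
 * pixel replaces the fused pixel. */
void lrf_fuse(const unsigned char *frame, unsigned char *fused,
              double *best_sharp, int w, int h)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            double s = sharpness(frame, w, h, x, y);
            if (s > best_sharp[y * w + x]) {
                best_sharp[y * w + x] = s;
                fused[y * w + x] = frame[y * w + x];
            }
        }
    }
}

To use the sketch, initialize fused from the first frame, zero best_sharp, and call lrf_fuse once per incoming frame; an FPGA implementation would stream frames through equivalent fixed-function logic rather than looping on a CPU.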
© 2014 Society of Photo-Optical Instrumentation Engineers (SPIE).
Christopher R. Jackson, Garrett A. Ejzak, Mathieu Aubailly, Gary W. Carhart, J. Jiang Liu, Fouad Kiamilev, "Hardware acceleration of lucky-region fusion (LRF) algorithm for imaging", Proc. SPIE 9070, Infrared Technology and Applications XL, 90703C (24 June 2014); https://doi.org/10.1117/12.2053898