In numerous computer vision applications, enhancing the quality and resolution of captured video can be
critical. Acquired video is often grainy and of low quality due to camera motion, transmission
bottlenecks, and other factors, but postprocessing can enhance it. Superresolution greatly reduces
camera jitter, delivering smooth, stabilized, high-quality video. In this paper, we extend previous work on a real-time superresolution
application implemented in ASIC/FPGA hardware. A gradient-based technique is used to register the
frames at the sub-pixel level. Once the high-resolution grid is populated, an improved regularization
technique iteratively refines the image by back-projection to yield a sharp, undistorted result. The
algorithm was first tested in software and then migrated to hardware, upscaling 320x240 video to
1280x960 at about 30 fps, a 16X increase in total pixels. Various input parameters,
such as the input image size, the enlargement factor, and the number of nearest neighbors, can be
tuned conveniently by the user. We use a maximum word size of 32 bits to implement the algorithm in
MATLAB Simulink as well as in FPGA hardware, striking a good balance between numerical precision and
performance. The proposed system is robust and highly efficient, and we demonstrate the performance
improvement of the hardware superresolution over the software version (C code).
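For concreteness, the iterative back-projection refinement mentioned above can be sketched in a few lines. This is a minimal single-frame NumPy illustration, assuming a simple box point-spread function and nearest-neighbor upsampling; it is not the fixed-point, multi-frame FPGA implementation described in the paper.

```python
import numpy as np

def downsample(hr, factor):
    """Simulate the camera: average each factor x factor block (box PSF, assumed)."""
    h, w = hr.shape
    return hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(lr, factor):
    """Nearest-neighbor zoom back onto the high-resolution grid."""
    return np.repeat(np.repeat(lr, factor, axis=0), factor, axis=1)

def iterative_back_projection(lr, factor=4, iters=20, step=1.0):
    """Refine a high-resolution estimate by back-projecting the
    low-resolution reconstruction error at each iteration."""
    hr = upsample(lr, factor)                    # initial HR guess
    for _ in range(iters):
        err = lr - downsample(hr, factor)        # residual in LR space
        hr = hr + step * upsample(err, factor)   # push residual back to HR grid
    return hr
```

Each iteration re-images the current high-resolution estimate, compares it with the observed low-resolution frame, and adds the upsampled residual back, so the estimate is progressively driven toward consistency with the observation while the upscaling factor (4x per axis here, matching the paper's 320x240 to 1280x960 case) stays fixed.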