Despite the evolution of technologies, designing high-quality image acquisition systems remains a complex challenge. Indeed, during the acquisition process, the recorded image does not fully represent the real visual scene: the recorded information can be partial due to dynamic range limitations and degraded by distortions of the acquisition system. Typically, these issues have several origins, such as lens blur or the limited resolution of the image sensor. In this paper, we propose a full image enhancement system that includes lens blur correction based on non-blind deconvolution, followed by spatial resolution enhancement based on a Super-Resolution technique. The lens correction has been implemented in software, whereas the Super-Resolution has been implemented both in software and in hardware (on an FPGA). The two processing steps have been validated using well-known image quality metrics, highlighting improvements in the quality of the resulting images.
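The abstract does not detail the deconvolution algorithm used. As an illustration only, a minimal sketch of non-blind deconvolution (the class of method named above, where the lens point spread function is assumed known) can be written as a Wiener filter in the frequency domain; the function name and the regularization constant `k` are assumptions, not the paper's implementation:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Illustrative non-blind deconvolution via a Wiener filter.
    Assumes the lens PSF is known (measured or modeled); `k`
    approximates the noise-to-signal power ratio."""
    # Pad the PSF to the image size and center it at the origin.
    psf_padded = np.zeros_like(blurred, dtype=float)
    ph, pw = psf.shape
    psf_padded[:ph, :pw] = psf
    psf_padded = np.roll(psf_padded, (-(ph // 2), -(pw // 2)), axis=(0, 1))
    H = np.fft.fft2(psf_padded)
    G = np.fft.fft2(blurred)
    # Wiener filter: F = G * conj(H) / (|H|^2 + k)
    F = G * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))
```

Larger `k` suppresses noise amplification at frequencies where the PSF response is weak, at the cost of a less sharp restoration.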
Proc. SPIE. 9897, Real-Time Image and Video Processing 2016
KEYWORDS: Human-machine interfaces, High dynamic range image sensors, Detection and tracking algorithms, Cameras, Sensors, Image processing, Video, Image resolution, Digital cameras, Field programmable gate arrays, Image sensors, Range imaging, High dynamic range imaging, Raster graphics
High dynamic range (HDR) image generation from a set of low dynamic range images taken with different exposure times is a low-cost and easy technique. This technique provides good results for static scenes. However, temporal exposure bracketing cannot be applied directly to dynamic scenes, since camera or object motion across the bracketed exposures creates ghosts in the resulting HDR image. In this paper we describe a real-time ghost-removal hardware implementation added to the high dynamic range video flow of our FPGA-based HDR smart camera, which is able to provide a full-resolution (1280 x 1024) HDR video stream at 60 fps. We present experimental results to show the efficiency of our implemented method in ghost removal.
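The abstract does not spell out the ghost-detection criterion, but the underlying idea can be sketched in software: after normalizing each bracketed frame by its exposure time, static pixels should report the same radiance, so pixels whose estimates disagree can be flagged and excluded from the HDR merge. The function name, threshold, and saturation limits below are illustrative assumptions, not the paper's exact hardware method:

```python
import numpy as np

def ghost_mask(ref, img, t_ref, t_img, thresh=0.25):
    """Illustrative exposure-ratio ghost detection between two
    bracketed LDR frames (8-bit values). Returns a boolean mask of
    pixels that are likely ghosted (moving content)."""
    eps = 1e-6
    # Exposure-normalized radiance estimates.
    r_ref = ref.astype(float) / t_ref
    r_img = img.astype(float) / t_img
    rel_diff = np.abs(r_img - r_ref) / (r_ref + eps)
    # Ignore saturated or underexposed pixels, which cannot be compared.
    valid = (img > 5) & (img < 250) & (ref > 5) & (ref < 250)
    return valid & (rel_diff > thresh)
```

Pixels flagged by such a mask would typically be filled from the single best-exposed frame rather than the multi-exposure combination.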
Standard cameras capture only a fraction of the information that is visible to the human visual system. This is especially true for natural scenes that include areas of low and high illumination, due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full Dynamic Range (DR), resulting in low-quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions for enhancing the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple capture, HDR processing, data display, and transfer of an HDR color video at full sensor resolution (1280 x 1024 pixels) at 60 frames per second. The main contributions of this work are: (1) a Multiple Exposure Control (MEC) dedicated to smart image capture with three alternating exposure times that are dynamically evaluated from frame to frame, (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times, (3) HDR creation by combining the video streams using a specific hardware version of Debevec's technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
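Contribution (3) refers to Debevec-style HDR creation, which in its standard software form is a per-pixel weighted average of the exposure-normalized frames in the log domain. The sketch below illustrates that idea under the simplifying assumption of a linear 8-bit sensor response (the hat weighting and function name are illustrative, not the camera's hardware pipeline):

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge bracketed LDR exposures (values in [0, 255]) into one HDR
    radiance map via a Debevec-style weighted log-domain average.
    Assumes a linear sensor response for simplicity."""
    eps = 1e-6
    num = np.zeros(np.asarray(images[0]).shape, dtype=float)
    den = np.zeros_like(num)
    for img, t in zip(images, exposure_times):
        z = np.asarray(img, dtype=float)
        # Hat weighting: trust mid-range pixels, discount
        # under- and over-exposed ones.
        w = np.minimum(z, 255.0 - z) / 127.5
        num += w * np.log(z / t + eps)
        den += w
    return np.exp(num / np.maximum(den, eps))
```

Pixels saturated in the long exposure get zero weight there and are recovered from the shorter exposures, which is what extends the dynamic range.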
In many applications such as video surveillance or defect detection, the perception of information related to a scene is limited in areas with strong contrast. The high dynamic range (HDR) capture technique can deal with these limitations. The proposed method has the advantage of automatically selecting multiple exposure times, making the outputs more visible than with fixed exposures. A real-time hardware implementation of the HDR technique that reveals more details in both dark and bright areas of a scene is an important line of research. For this purpose, we built a dedicated smart camera that performs both capture and HDR video processing from three exposures. The novelty of our work is shown through the following points: HDR video capture through multiple exposure control, HDR memory management, HDR frame generation, and representation in a hardware context. Our camera achieves real-time HDR video output at 60 fps and 1.3 megapixels and demonstrates the efficiency of our technique through experimental results. Applications of this HDR smart camera include the movie industry, the mass-consumer market, the military, the automotive industry, and surveillance.
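The automatic selection of exposure times mentioned above can be illustrated with a simple frame-to-frame feedback rule: shorten an exposure when too many pixels saturate, lengthen it when too many are near black. The thresholds, step factor, and function name below are illustrative assumptions rather than the camera's actual control law:

```python
import numpy as np

def adjust_exposure(frame, t, sat_frac=0.02, dark_frac=0.02, step=1.25):
    """Illustrative per-stream exposure-control step for an 8-bit
    frame: returns the exposure time to use for the next frame."""
    z = np.asarray(frame, dtype=float)
    n = z.size
    if np.count_nonzero(z >= 250) / n > sat_frac:
        return t / step      # too many saturated pixels: expose less
    if np.count_nonzero(z <= 5) / n > dark_frac:
        return t * step      # too many dark pixels: expose more
    return t                 # exposure is acceptable
```

In a three-exposure scheme like the one described, a rule of this kind would run independently on the short, medium, and long captures so the bracket tracks the scene from frame to frame.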
The advent of camera phones marks a new phase in embedded camera sales. By late 2009, the total number of camera phones will exceed that of both conventional and digital cameras shipped since the invention of photography. The use in mobile phones of applications like visiophony, matrix code readers, and biometrics requires a high degree of component flexibility that image processors (IPs) have not, to date, been able to provide. For all these reasons, programmable processor solutions have become essential. This paper presents several techniques geared to speeding up image processors. It demonstrates that a twofold gain is possible for the complete image acquisition chain and the enhancement pipeline downstream of the video sensor. Such results confirm the potential of these computing systems for supporting future applications.
Computer-assisted vision plays a greater and greater part in our society, in various fields such as people and goods safety, industrial production, telecommunications, and robotics. However, technical developments are still timid and slowed down by various factors linked to sensor cost, to the systems' lack of flexibility, to the difficulty of rapidly developing complex and robust applications, and to the lack of interaction among these systems themselves, or with their environment. This paper describes the ICAM (Intelligent CAMera) project, a smart camera with real-time video processing capabilities. This camera associates a sensor with massively parallel outputs and a SIMD processor network to achieve very high-speed processing. The paper presents the first modeling of this device and first results.