Spectral and Polarization Imaging (SPI) is an emerging sensing method that combines the acquisition of both spectral and polarization information of a scene. It can benefit various applications such as appearance characterization from measurement, reflectance property estimation, diffuse/specular component separation, and material classification. In this paper, we present a review of recent SPI systems from the literature. We describe the existing SPI systems in terms of the technology employed, the imaging conditions, and the targeted application.
A polarization filter array (PFA) camera is an imaging device capable of analyzing the polarization state of light in a snapshot manner. These cameras exhibit spatial variations, i.e., nonuniformity, in their response due to optical imperfections introduced during the nanofabrication process. Calibration is done by computational imaging algorithms to correct the data for radiometric and polarimetric errors. We reviewed existing calibration methods and applied them using a practical optical acquisition setup and a commercially available PFA camera. The goal of the evaluation is first to compare which algorithm performs best with regard to polarization error, and then to investigate the influence of both the dynamic range and the number of polarization-angle stimuli in the training data. To our knowledge, this has not been done in previous work.
A polarization filter array (PFA) camera is an imaging device capable of analyzing the polarization state of light in a snapshot manner. These cameras exhibit spatial variations, i.e., nonuniformity, in their response due to optical imperfections introduced during the nanofabrication process. Calibration is done by computational imaging algorithms to correct the data for radiometric and polarimetric errors. In this paper, we review existing calibration procedures and show practical implementation results for one of these methods applied to a commercially available PFA camera.
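For context, the single-pixel calibration procedures discussed in these papers typically reduce to applying a per-pixel dark offset and flat-field gain to the raw mosaic before the polarization parameters are computed. Below is a minimal Python/NumPy sketch of that idea together with linear Stokes estimation on 2x2 super-pixels; the mosaic layout, variable names, and calibration-data format are illustrative assumptions and do not reproduce any specific method from the papers above.

```python
# Minimal sketch of single-pixel (gain/offset) PFA correction followed by
# linear Stokes estimation. Variable names and the calibration-data layout
# are illustrative assumptions, not the API of any specific camera vendor.
import numpy as np

def correct_raw(raw, dark, gain):
    """Apply per-pixel dark-offset subtraction and flat-field gain.

    raw, dark, gain: 2-D arrays at the full PFA mosaic resolution.
    dark and gain are assumed to come from a prior calibration
    (e.g., dark frames and a uniform unpolarized flat field).
    """
    return (raw.astype(np.float64) - dark) * gain

def stokes_from_mosaic(img):
    """Estimate S0, S1, S2 per 2x2 super-pixel of a 0/45/90/135 PFA mosaic.

    The assumed mosaic layout is [[90, 45], [135, 0]]; adjust the slicing
    if the camera uses a different arrangement.
    """
    i90  = img[0::2, 0::2]
    i45  = img[0::2, 1::2]
    i135 = img[1::2, 0::2]
    i0   = img[1::2, 1::2]
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal/vertical preference
    s2 = i45 - i135                      # diagonal preference
    return s0, s1, s2
```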
Multi-band polarization imaging, by means of analyzing spectral and polarimetric data simultaneously, is an effective way to improve the quantity and quality of information recovered from a scene. It can therefore enhance computer vision algorithms, as it permits recovering more statistical information about a surface than color imaging. This work presents a database of polarimetric and multispectral images that combine visible and near-infrared (NIR) information. An experimental setup is built around a dual-sensor camera. Multispectral images are reconstructed with the dual-RGB method. The polarimetric feature is obtained by rotating linear polarization filters in front of the camera to four different angles (0, 45, 90, and 135 degrees). The resulting imaging system outputs six spectral/polarimetric channels. We capture 10 different scenes composed of several materials, including a color checker, highly reflective metallic objects, plastic, painting, liquid, fabric, and food. Our database of images is provided online as supplementary material for further simulation and data analysis. This work also discusses several issues with the multi-band imaging technique described.
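As an illustration of how the four filter angles are turned into polarimetric features, the sketch below computes the linear Stokes parameters, the degree of linear polarization (DoLP), and the angle of linear polarization (AoLP) per spectral channel, assuming the four captures are already registered and radiometrically consistent; array names are illustrative and not taken from the paper.

```python
# Sketch: degree and angle of linear polarization per spectral channel from
# four registered captures taken behind a linear polarizer at 0/45/90/135 deg.
# Inputs are H x W x C float images; names (i0, i45, ...) are illustrative.
import numpy as np

def linear_polarization(i0, i45, i90, i135, eps=1e-9):
    s0 = 0.5 * (i0 + i45 + i90 + i135)           # total intensity
    s1 = i0 - i90                                # horizontal/vertical preference
    s2 = i45 - i135                              # diagonal preference
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)   # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)              # angle of linear polarization (rad)
    return s0, dolp, aolp
```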
Proc. SPIE. 9897, Real-Time Image and Video Processing 2016
KEYWORDS: Human-machine interfaces, High dynamic range image sensors, Detection and tracking algorithms, Cameras, Sensors, Image processing, Video, Image resolution, Digital cameras, Field programmable gate arrays, Image sensors, Range imaging, High dynamic range imaging, Raster graphics
High dynamic range (HDR) image generation from a set of low dynamic range images taken at different exposure times is a low-cost and easy technique. This technique provides good results for static scenes. Temporal exposure bracketing cannot be applied directly to dynamic scenes, since camera or object motion between bracketed exposures creates ghosts in the resulting HDR image. In this paper we describe a real-time ghost-removal hardware implementation added to the HDR video flow of our FPGA-based HDR smart camera, which is able to provide a full-resolution (1280 x 1024) HDR video stream at 60 fps. We present experimental results to show the efficiency of our implemented method in ghost removal.
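The ghost-removal idea can be pictured in software as follows: pixels whose exposure-normalized values disagree across bracketed frames are flagged as motion and excluded (or down-weighted) in the HDR merge. The sketch below is a simplified illustration of that principle, not the hardware pipeline described in the paper; the threshold and saturation level are arbitrary assumed values.

```python
# Sketch of a simple ghost mask between two bracketed exposures: pixels whose
# exposure-normalized values disagree by more than a threshold are treated as
# motion ("ghost") pixels and can be excluded from the HDR merge.
# Threshold, saturation level, and normalization are illustrative assumptions.
import numpy as np

def ghost_mask(img_a, t_a, img_b, t_b, threshold=0.1, sat=0.95):
    """img_a, img_b: linear images in [0, 1]; t_a, t_b: exposure times (s)."""
    rad_a = img_a / t_a                       # exposure-normalized radiance
    rad_b = img_b / t_b
    valid = (img_a < sat) & (img_b < sat)     # ignore saturated pixels
    diff = np.abs(rad_a - rad_b) / (np.maximum(rad_a, rad_b) + 1e-9)
    return valid & (diff > threshold)         # True where motion is suspected
```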
Proc. SPIE. 9534, Twelfth International Conference on Quality Control by Artificial Vision 2015
KEYWORDS: Image compression, Video acceleration, High dynamic range image sensors, Imaging systems, Cameras, Sensors, Video, Field programmable gate arrays, Image sensors, High dynamic range imaging
Standard cameras capture only a fraction of the information that is visible to the human visual system. This is especially true for natural scenes that include areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full Dynamic Range (DR), resulting in low-quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions to enhance the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple capture, HDR processing, data display, and transfer of an HDR color video at full sensor resolution (1280 x 1024 pixels) at 60 frames per second. The main contributions of this work are: (1) a Multiple Exposure Control (MEC) dedicated to smart image capture with three alternating exposure times that are dynamically evaluated from frame to frame, (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times, (3) HDR creation by combining the video streams using a specific hardware version of Debevec's technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
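For reference, the HDR creation and tone-mapping stages correspond, in software terms, to a Debevec-and-Malik-style weighted merge in the log-radiance domain followed by a global operator. The sketch below illustrates those two steps under the assumption of a linear sensor response; the hat weighting, constants, and Reinhard-style tone curve are illustrative choices, not the camera's hardware implementation.

```python
# Software sketch of Debevec-and-Malik-style HDR assembly from bracketed
# exposures, followed by a simple global tone mapping (Reinhard-style).
# Assumes a linear sensor response; a measured response curve would normally
# be applied first. Weights and constants are illustrative.
import numpy as np

def hat_weight(z):
    """Hat-shaped weight favouring well-exposed pixels, z in [0, 1]."""
    return 1.0 - np.abs(2.0 * z - 1.0)

def merge_hdr(images, times, eps=1e-6):
    """images: list of linear frames in [0, 1]; times: exposure times (s)."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, times):
        w = hat_weight(img)
        num += w * (np.log(img + eps) - np.log(t))   # weighted log radiance
        den += w
    return np.exp(num / np.maximum(den, eps))        # radiance map

def global_tone_map(hdr, key=0.18, eps=1e-6):
    """Map the radiance map to display range with a global operator."""
    log_avg = np.exp(np.mean(np.log(hdr + eps)))     # log-average luminance
    scaled = key * hdr / log_avg
    return scaled / (1.0 + scaled)                   # compress to [0, 1)
```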
KEYWORDS: High dynamic range imaging, Video, Cameras, Sensors, Video surveillance, Image quality, Video acceleration, Image sensors, Field programmable gate arrays, Time multiplexed optical shutter
In many applications such as video surveillance or defect detection, the perception of information related to a scene is limited in areas with strong contrasts. The high dynamic range (HDR) capture technique can overcome these limitations. The proposed method has the advantage of automatically selecting multiple exposure times to make outputs more visible than those obtained with fixed exposures. A real-time hardware implementation of the HDR technique that shows more details in both dark and bright areas of a scene is an important line of research. For this purpose, we built a dedicated smart camera that performs both capture and HDR video processing from three exposures. The novelty of our work lies in the following points: HDR video capture through multiple exposure control, HDR memory management, and HDR frame generation and representation in a hardware context. Our camera achieves real-time HDR video output at 60 fps at 1.3 megapixels and demonstrates the efficiency of our technique through experimental results. Applications of this HDR smart camera include the movie industry, the mass-consumer market, the military, the automotive industry, and surveillance.
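The multiple exposure control mentioned above can be pictured as a frame-to-frame feedback rule on the exposure times. The sketch below shows one such rule, in which the middle exposure tracks the mean brightness of the previous frame while the short and long exposures are spaced around it by a fixed ratio; all constants are assumptions for illustration and do not reflect the control law implemented in the camera.

```python
# Illustrative sketch of a multiple-exposure-control feedback rule: the middle
# exposure tracks the scene's mean brightness, and the short and long
# exposures are spaced around it by a fixed ratio. Constants (target level,
# spacing ratio, bounds) are assumptions for illustration only.
import numpy as np

def update_exposures(prev_frame, t_mid, target=0.5, ratio=8.0,
                     t_min=1e-4, t_max=1e-1):
    """prev_frame: linear mid-exposure frame in [0, 1]; t_mid: its exposure (s)."""
    mean = float(np.mean(prev_frame))
    t_mid_new = float(np.clip(t_mid * target / max(mean, 1e-3), t_min, t_max))
    t_short = float(np.clip(t_mid_new / ratio, t_min, t_max))   # preserve highlights
    t_long = float(np.clip(t_mid_new * ratio, t_min, t_max))    # reveal shadows
    return t_short, t_mid_new, t_long
```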