Proc. SPIE. 10223, Real-Time Image and Video Processing 2017
KEYWORDS: Image fusion, Visual process modeling, Detection and tracking algorithms, Cameras, Sensors, Image processing, Retina, Computing systems, Feature extraction, Video surveillance, Biomimetics, Signal processing, Video processing, Embedded systems, Systems modeling, RGB color model
Unmanned systems used for threat detection and identification are still not efficient enough to monitor the battlefield autonomously. Their size and energy limitations prevent these systems from using most state-of-the-art computer vision algorithms for recognition. A bio-inspired approach based on the human peripheral and foveal visions has been reported as a way to combine recognition performance with computational efficiency. A low-resolution camera observes a large zone and detects significant changes, while a second camera focuses on each event and provides a high-resolution image of it. Whereas existing biomimetic approaches usually separate the two vision modes according to their functionality (e.g. detection, recognition) and to their basic primitives (i.e. features, algorithms), our approach uses common structures and features for both the peripheral and foveal cameras, thereby decreasing the computational load with respect to previous approaches.

The proposed approach is demonstrated on simulated data. The outcome proves particularly attractive for real-time embedded systems, as the primitives (features and classifier) have already shown good performance in low-power embedded systems. This first result reveals the high potential of the dual-view fusion technique in the context of long-duration unmanned video surveillance systems. It also encourages us to go further in mimicking the mechanisms of the human eye. In particular, we expect that adding feedback from the fovea towards the peripheral vision will further enhance the quality and efficiency of the detection process.
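As an illustration, the peripheral/foveal coupling described above can be sketched as a two-stage pipeline. The following is a minimal sketch, not the authors' implementation; the function names, the simple differencing detector, and the grayscale frame assumption are all ours.

```python
import numpy as np

def detect_changes(frame, background, threshold=25):
    """Peripheral stage: flag a coarse region whose intensity deviates
    from a reference background (illustrative simple differencing on
    grayscale, low-resolution frames)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    ys, xs = np.nonzero(diff > threshold)
    if xs.size == 0:
        return None
    # Bounding box of the changed region, in low-resolution coordinates.
    return xs.min(), ys.min(), xs.max(), ys.max()

def foveate(hires_frame, bbox, scale):
    """Foveal stage: crop the high-resolution frame around the event
    detected at low resolution (scale = hi-res / low-res size ratio)."""
    x0, y0, x1, y1 = (int(v * scale) for v in bbox)
    return hires_frame[y0:y1 + 1, x0:x1 + 1]
```

The point of sharing primitives between both stages, as the abstract argues, is that the same features and classifier can then run on the foveal crop without a second, separate processing chain.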
Improving the surveillance capacity over wide zones requires a set of smart, battery-powered Unattended Ground Sensors (UGS) capable of issuing an alarm to a decision-making center. Only high-level information has to be sent when a relevant suspicious situation occurs. In this paper we propose an innovative bio-inspired approach that mimics the human bi-modal vision mechanism and the parallel processing ability of the human brain. The designed prototype exploits two levels of analysis: a low-level panoramic motion analysis, the peripheral vision, and a high-level event-focused analysis, the foveal vision. By tracking moving objects and fusing multiple criteria (size, speed, trajectory, etc.), the peripheral vision module acts as a fast detector of relevant events. The foveal vision module focuses on the detected events to extract more detailed features (texture, color, shape, etc.) in order to improve recognition efficiency. The implemented recognition core is able to acquire human knowledge and to classify a huge amount of heterogeneous data in real time thanks to its natively parallel hardware structure. This UGS prototype validates our system approach in laboratory tests: the peripheral analysis module demonstrates a low false alarm rate, and the foveal vision correctly focuses on the detected events. A parallel FPGA implementation of the recognition core fulfills the embedded application requirements. These results pave the way for future reconfigurable virtual field agents: because they process data locally and send only high-level information, their energy requirements and electromagnetic signature are minimized. Moreover, the embedded Artificial Intelligence core enables these bio-inspired systems to recognize and learn new significant events. By duplicating human expertise in potentially hazardous places, our miniature visual event detector will allow early warning and contribute to better human decision making.
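The multi-criteria fusion step can be pictured as a weighted scoring of per-track features. The weights, normalization constants, and threshold below are illustrative placeholders, not the prototype's actual values.

```python
from dataclasses import dataclass

@dataclass
class Track:
    size: float          # apparent object size (pixels)
    speed: float         # image-plane speed (pixels/frame)
    straightness: float  # trajectory regularity in [0, 1]

def event_score(t: Track, weights=(0.4, 0.4, 0.2)) -> float:
    """Fuse normalized criteria into a single relevance score
    (illustrative linear fusion; real weights are application-tuned)."""
    w_size, w_speed, w_traj = weights
    return (w_size * min(t.size / 100.0, 1.0)
            + w_speed * min(t.speed / 10.0, 1.0)
            + w_traj * t.straightness)

def is_relevant(t: Track, threshold=0.5) -> bool:
    """An alarm is issued only when the fused score passes a threshold,
    which is how the peripheral stage keeps its false-alarm rate low."""
    return event_score(t) > threshold
```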
Pedestrian movement along critical infrastructures such as pipelines, railways, or highways is of major interest in surveillance applications, as is pedestrian behavior in urban environments. The goal is to anticipate illicit or dangerous human activities. For this purpose, we propose an all-in-one small autonomous system that delivers high-level statistics and reports alerts in specific cases. This situational awareness project leads us to analyze the scene efficiently through movement analysis. A dynamic background extraction algorithm is developed to reach the required degree of robustness against natural and urban environment perturbations while matching the embedded implementation constraints. When changes are detected in the scene, specific patterns are applied to detect and highlight relevant movements. Depending on the application, specific descriptors can be extracted and fused in order to reach a high level of interpretation. In this paper, our approach is applied to two operational use cases: pedestrian urban statistics and railway surveillance. In the first case, a grid of prototypes is deployed over a city centre to collect pedestrian movement statistics up to a macroscopic level of analysis. The results demonstrate the relevance of the delivered information; in particular, the flow density map highlights the pedestrians' preferred paths along the streets. In the second case, one prototype is set next to high-speed train tracks to secure the area. The results exhibit a low false alarm rate and support our approach of a large sensor network delivering a precise operational picture without overwhelming a supervisor.
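For concreteness, one common, low-cost form of dynamic background extraction is an exponential running average. The sketch below is an assumption on our part; the abstract does not detail the actual algorithm, and the adaptation rate and threshold are placeholder values.

```python
import numpy as np

class RunningAverageBackground:
    """Exponential running-average background model: a standard,
    embedded-friendly form of dynamic background extraction
    (illustrative; not necessarily the paper's algorithm)."""

    def __init__(self, first_frame, alpha=0.02, threshold=30):
        self.bg = first_frame.astype(np.float32)
        self.alpha = alpha          # adaptation rate to slow scene changes
        self.threshold = threshold  # foreground decision level

    def apply(self, frame):
        f = frame.astype(np.float32)
        mask = np.abs(f - self.bg) > self.threshold
        # Update the background only where no motion is detected, so
        # moving objects are not absorbed into the model.
        self.bg[~mask] += self.alpha * (f[~mask] - self.bg[~mask])
        return mask
```

Updating only the non-foreground pixels is the usual way such a model stays robust to slow illumination drift (natural perturbations) without learning the moving pedestrians into the background.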
The purpose of this document is to present a comparative study of five heart sound localization algorithms, one of which is a novel approach based on radial basis function networks. The advantages and disadvantages of each method are evaluated on a database of 50 subjects: 25 healthy subjects selected from the University Hospital of Strasbourg (HUS) and from the MARS500 project (Moscow), and 25 subjects with cardiac pathologies selected from the HUS. The study is carried out under the supervision of an experienced cardiologist. The performance of each method is evaluated by calculating the area under the receiver operating characteristic curve (AUC), and its robustness is assessed against different levels of additive white Gaussian noise.
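For reference, the AUC criterion used above can be computed directly from detection scores and ground-truth labels via the rank-statistic (Mann-Whitney) formulation. This is a generic sketch, not code from the study.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive sample is scored
    higher than a randomly chosen negative one (ties count as half)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos = scores[labels]
    neg = scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Example: perfectly separated scores yield AUC = 1.0
print(auc([0.9, 0.8, 0.2, 0.1], [True, True, False, False]))
```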
Today, Optronic Countermeasure (OCM) scenarios typically involve an IR Focal-Plane Array (FPA) facing in-band laser irradiation. In order to evaluate the efficiency of new countermeasure concepts or the robustness of FPAs, the whole set of interaction effects must be quantified. Although some studies in the open literature show the vulnerability of imaging systems to laser dazzling, the diversity of the analysis criteria employed does not allow the results of these studies to be correlated.
Therefore, we focus our effort on the definition of common sensor figures of merit adapted to laser OCM studies. In this paper, two levels of investigation are presented: the first analyzes the local nonlinear photocell response, and the second quantifies the overall dazzling impact on the image. The first study gives interesting results on the behavior of InSb photocells irradiated by a picosecond MWIR laser. With increasing irradiance, four successive response regimes appear: linear, logarithmic, decreasing, and finally a permanent linear offset. In the second study, our quantification tools are described, and their successful application to the picosecond laser-dazzling characterization of an InSb FPA is demonstrated.
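The four regimes reported above can be pictured with a piecewise response model. The breakpoints, functional forms, and offset level below are arbitrary placeholders chosen only to illustrate the shape of the reported behavior, not measured values from the study.

```python
import numpy as np

def photocell_response(irradiance, e1=1.0, e2=10.0, e3=100.0, offset=0.2):
    """Illustrative piecewise model of the four successive photocell
    regimes: linear, logarithmic, decreasing, then a permanent offset.
    Breakpoints e1..e3 and the offset are arbitrary placeholders."""
    E = np.atleast_1d(np.asarray(irradiance, dtype=float))
    out = np.empty_like(E)
    lin = E <= e1
    log = (E > e1) & (E <= e2)
    dec = (E > e2) & (E <= e3)
    sat = E > e3
    out[lin] = E[lin]                                # linear regime
    out[log] = e1 * (1.0 + np.log(E[log] / e1))      # logarithmic regime
    r2 = e1 * (1.0 + np.log(e2 / e1))                # response at breakpoint e2
    out[dec] = r2 * (e3 - E[dec]) / (e3 - e2)        # decreasing regime
    out[sat] = offset                                # permanent offset regime
    return out
```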
A measurement of the degradation of the photoelectric parameters (contrast, number of affected pixels) of visible Focal-Plane Arrays (FPAs) irradiated by a laser has been performed. The applied irradiation fluence levels range typically from 300 μJ/cm² to 700 mJ/cm². A silicon FPA has been used for the visible domain. The effects of a laser irradiation in the Field Of View (FOV) and out of the FOV of the camera have been studied. It has been shown that the camera contrast decrease can reach 50% during laser irradiation performed out of the FOV. Moreover, the effects of the Automatic Gain Control (AGC) and of the integration time on the blooming processes have been investigated. No influence of the AGC on the number of affected pixels has been measured, and the integration time has been revealed as the most sensitive parameter in the blooming process. Finally, only a small amount of laser energy is necessary to dazzle the system (1 μJ for 152 ns). A simulation of the irradiated images has been developed using a finite-difference solution. Good agreement has been shown between the experimental and simulated images. This procedure can be extended to test the blooming effects of IR cameras.
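The abstract mentions a finite-difference simulation of the irradiated images. One plausible form, offered purely as an assumption since the actual scheme is not detailed, is an explicit finite-difference diffusion of excess charge across the array, which reproduces a blooming-like spread around the laser spot.

```python
import numpy as np

def simulate_blooming(image, spot_center, spot_charge, steps=50, d=0.2):
    """Hypothetical explicit finite-difference scheme: excess charge
    deposited at the laser spot diffuses to neighboring pixels,
    mimicking blooming. d is the diffusion coefficient; this explicit
    scheme is stable for d <= 0.25. All values are placeholders."""
    charge = np.zeros_like(image, dtype=float)
    charge[spot_center] = spot_charge
    for _ in range(steps):
        # 5-point Laplacian with zero-flux borders via edge padding.
        p = np.pad(charge, 1, mode="edge")
        lap = (p[:-2, 1:-1] + p[2:, 1:-1]
               + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * charge)
        charge += d * lap
    # Saturate pixels at the sensor full-well value (8-bit here).
    return np.clip(image + charge, 0, 255)
```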
3-D optical fluorescence microscopy has now become an efficient tool for the volumetric investigation of living biological samples. The 3-D data can be acquired by optical sectioning microscopy, performed by axially stepping the object relative to the objective. For any instrument, each recorded image can be described by a convolution equation between the original object and the Point Spread Function (PSF) of the acquisition system. To assess performance and ensure data reproducibility, as for any 3-D quantitative analysis, system identification is mandatory. The PSF characterizes the properties of the image acquisition system; it can be computed or acquired experimentally. Statistical tools and Zernike moments are shown to be appropriate and complementary for describing a 3-D system PSF and for quantifying its variation as a function of the optical parameters. Some critical experimental parameters can be identified with these tools, which helps biologists define an acquisition protocol that optimizes the use of the system. The reduction of out-of-focus light is the main task of 3-D microscopy; it is carried out computationally by a deconvolution process. Pre-filtering the images improves the stability of the deconvolution results, making them less dependent on the regularization parameter; this helps biologists to use the restoration process.
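A worked statement of the convolution model mentioned above, where the additive noise term n is our assumption since the abstract does not specify a noise model:

i(x, y, z) = (o * h)(x, y, z) + n(x, y, z)

Here i is the recorded 3-D image, o the original object, h the system PSF, and * denotes 3-D convolution. Deconvolution amounts to inverting this relation to estimate o from i and a known h, which is why accurate PSF identification directly conditions the quality of the restoration.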