An effective method for detecting small, dim moving targets in complicated backgrounds is proposed. The approach builds on the non-local means filter and applies a novel circular-mask weight calculation model to the original background estimation scheme. By associating the grayscale-distribution similarity of the images with temporal information, the extended method successfully estimates the complicated background and extracts point targets. To compare the proposed method with existing target detection methods, signal-to-clutter ratio gain (SCRG) and background suppression factor (BSF) are employed for spatial performance comparison, and receiver operating characteristic (ROC) curves are used to compare detection performance on the target trajectory. Experimental results demonstrate good performance of the proposed method in complicated scenes and low signal-to-noise ratio images.
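The background estimation idea above can be sketched as follows. This is a minimal toy illustration, not the paper's exact model: a non-local-means estimate of each pixel's background from a search window, with patch similarity computed over a circular mask; the mask radius, search radius, and filter parameter h are assumed values.

```python
# Toy non-local-means background estimation with a circular patch mask.
# Subtracting the estimated background leaves the point target as a residual.
import math

def circular_offsets(radius):
    """Offsets (dy, dx) inside a disc of the given radius - the circular mask."""
    return [(dy, dx) for dy in range(-radius, radius + 1)
            for dx in range(-radius, radius + 1)
            if dy * dy + dx * dx <= radius * radius]

def nlm_background(img, radius=1, search=2, h=10.0):
    """Estimate the background of a 2-D grayscale image (list of lists)."""
    rows, cols = len(img), len(img[0])
    mask = circular_offsets(radius)

    def patch(y, x):
        # Circular patch around (y, x); edge pixels are clamped.
        return [img[min(max(y + dy, 0), rows - 1)][min(max(x + dx, 0), cols - 1)]
                for dy, dx in mask]

    bg = [[0.0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            p = patch(y, x)
            wsum, acc = 0.0, 0.0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    if dy == 0 and dx == 0:
                        continue  # exclude the centre so a point target
                                  # cannot reinforce its own background
                    ny = min(max(y + dy, 0), rows - 1)
                    nx = min(max(x + dx, 0), cols - 1)
                    q = patch(ny, nx)
                    d2 = sum((a - b) ** 2 for a, b in zip(p, q)) / len(mask)
                    w = math.exp(-d2 / (h * h))
                    wsum += w
                    acc += w * img[ny][nx]
            bg[y][x] = acc / wsum
    return bg

# A bright point target on a flat background stands out after subtraction.
img = [[10.0] * 7 for _ in range(7)]
img[3][3] = 200.0
bg = nlm_background(img)
residual = img[3][3] - bg[3][3]
```

Because the target pixel's patch matches no neighbour's patch, its neighbours' weights favour the flat surround, so the background estimate at the target location stays near the clutter level and the residual preserves the target.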
Knowledge of sea clutter is of great significance for marine target detection and discrimination. In this paper, the wideband backscattered electromagnetic (EM) fields of two-dimensional (2-D) sea surfaces are numerically calculated with the weighted curvature approximation (WCA) method. Monte Carlo trials are performed to investigate the influence of radar parameters on the statistical characteristics of the range-resolved sea clutter. The sea clutter is found to become spikier with finer radar resolution, lower grazing angle, narrower beam width, and in the upwind direction. Meanwhile, the Pareto distribution is shown to describe the statistics of the sea clutter intensities very well.
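Fitting a Pareto model to clutter intensities, as done in the statistical analysis above, can be sketched in a few lines. This toy uses a Pareto Type I model with maximum-likelihood estimates; the true shape parameter, scale, and sample size are arbitrary choices for the demonstration, not values from the paper.

```python
# Simulate Pareto-distributed intensities and recover the shape by MLE.
import math
import random

def pareto_sample(alpha, xm, n, rng):
    """Draw n Pareto(alpha, xm) intensities by inverse-CDF sampling."""
    return [xm * rng.random() ** (-1.0 / alpha) for _ in range(n)]

def pareto_mle(data):
    """Pareto Type I MLE: xm_hat = min(x), alpha_hat = n / sum(ln(x/xm))."""
    xm = min(data)
    n = len(data)
    alpha = n / sum(math.log(x / xm) for x in data)
    return alpha, xm

rng = random.Random(0)
data = pareto_sample(3.0, 1.0, 20000, rng)
alpha_hat, xm_hat = pareto_mle(data)
```

With 20,000 samples the estimated shape lands close to the true value of 3; a goodness-of-fit test on real clutter would follow the same fit step.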
As an important branch of infrared imaging technology, infrared target detection and tracking has great scientific value and a wide range of military and civilian applications. For infrared imagery characterized by low SNR and severe background noise, this paper proposes an effective target detection algorithm, implemented with OpenCV, that exploits the frame-to-frame correlation of a moving target and the temporal irrelevance of noise in sequential images. First, since temporal differencing and background subtraction are highly complementary, we combine frame differencing with background subtraction based on adaptive background updating; results indicate that this combination is simple and stably extracts the moving foreground target from the video sequence. Because the background updating mechanism continuously updates each pixel, the infrared moving target can be detected more accurately, and porting the OpenCV algorithms to a DSP platform paves the way for real-time infrared target detection and tracking. Next, we segment the image with an optimal thresholding algorithm, converting the gray image to a binary image to provide a better basis for detection across the image sequence. Finally, using the inter-frame correlation of moving objects together with mathematical morphology, we eliminate noise, shrink spurious regions, and smooth region boundaries. Experimental results prove that the algorithm achieves rapid and precise detection of small infrared targets.
Autonomous driving poses unique challenges for vehicle environment perception because of the complex driving environment in which the autonomous vehicle operates among the remote vehicles it must distinguish. Owing to the inherent uncertainty of traffic environments and the incomplete knowledge imposed by sensor limitations, an autonomous driving system that uses only local onboard sensor information is generally not sufficient for reliable intelligent driving with guaranteed safety. To overcome the limitations of the local (host) vehicle sensing system and to increase the likelihood of correct detections and classifications, collaborative information from cooperative remote vehicles can substantially improve the effectiveness of the vehicle decision-making process. The Dedicated Short Range Communication (DSRC) system provides a powerful inter-vehicle wireless channel that enhances the host vehicle's environment-perception capability with information transmitted from remote vehicles.
However, a major challenge must be addressed before one can fuse the DSRC-transmitted remote information with the host vehicle's Radar observations (in the present case): the remote DSRC data must be correctly associated with the corresponding onboard Radar data; this is an object matching problem. Direct raw-data association (measurement-to-measurement association, M2MA) is straightforward but error-prone because of the inherently uncertain nature of the observation data; these uncertainties can make the matching decision seriously difficult, especially with non-stationary data. In this study, we present an object matching algorithm based on track-to-track association (T2TA) and evaluate the proposed approach with prototype vehicles in real traffic scenarios. To fully exploit the potential of the DSRC system, only GPS position data from the remote vehicle are used in the fusion center (at the host vehicle); that is, we try to obtain what we need from the least amount of information. Additional feature information could aid the data association but is not considered here. Compared with M2MA, the T2TA object matching approach offers three benefits: i) tracks incorporate important statistical information and therefore support more reliable inference; ii) the smoothed track trajectories allow easier shape matching; and iii) each local vehicle can design its own tracker and send only tracks to the fusion center, alleviating communication constraints. A real traffic study across different driving environments, based on a statistical hypothesis test, shows promising object matching results with significant practical implications.
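The hypothesis test at the heart of T2TA can be sketched as a chi-square gate on the track-state difference. This is an illustrative simplification, not the paper's algorithm: 2-D position tracks with diagonal covariances, cross-covariance between the two trackers neglected, and a gate of 5.99 (the 95th percentile of a chi-square distribution with 2 degrees of freedom) assumed.

```python
# Track-to-track association test: two tracks are declared the same object
# when the Mahalanobis distance of their state difference is inside the gate.
def t2ta_same_object(x_radar, P_radar, x_dsrc, P_dsrc, gate=5.99):
    """x_* are [x, y] positions; P_* are diagonal covariances [var_x, var_y]."""
    d = [x_radar[0] - x_dsrc[0], x_radar[1] - x_dsrc[1]]
    S = [P_radar[0] + P_dsrc[0], P_radar[1] + P_dsrc[1]]  # combined covariance
    d2 = d[0] * d[0] / S[0] + d[1] * d[1] / S[1]          # Mahalanobis distance
    return d2 <= gate

# Radar track and a DSRC/GPS track of (presumably) the same remote vehicle.
same = t2ta_same_object([12.0, 3.1], [0.5, 0.5], [12.4, 2.8], [1.0, 1.0])
# A distant track should be rejected.
other = t2ta_same_object([40.0, 9.0], [0.5, 0.5], [12.4, 2.8], [1.0, 1.0])
```

The gate is what makes the decision statistical rather than a raw nearest-neighbour match: noisy but consistent tracks pass, while genuinely different objects fail by a wide margin.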
The fractional calculus (FC) deals with integrals and derivatives of arbitrary (i.e., non-integer) order and shares its origins with classical integral and differential calculus. The fractional Fourier transform (FRFT), which has found many applications in optics and other areas, is a generalization of the ordinary Fourier transform. The FC and the FRFT are two of the most interesting and useful fractional topics. In recent years many papers have appeared on the FC and the FRFT, but few discuss the connection between the two. We study their relationship and derive a relational expression between them, motivated by the expectation of interdisciplinary cross-fertilization. For example, the properties of the FC (non-locality, etc.) can be used to solve problems that are difficult for the FRFT in optical engineering, and the physical meaning of the FRFT's optical implementation can conversely illuminate the physical meaning of the FC. FC and FRFT approaches can thus be transposed between the two fractional areas, reinforcing the unquestionable success of the fractional methodology in its many applications, notably in nonlinear and complex system dynamics and image processing.
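For reference, the two objects being related can be written down explicitly. The following uses one common convention for the FRFT kernel and the Riemann-Liouville form of the fractional derivative; the paper's derived relational expression is not reproduced here.

```latex
% Fractional Fourier transform of order (angle) \alpha:
\mathcal{F}_{\alpha}[f](u) = \int_{-\infty}^{\infty} K_{\alpha}(u,x)\, f(x)\, dx,
\qquad
K_{\alpha}(u,x) = \sqrt{\frac{1 - i\cot\alpha}{2\pi}}
\exp\!\Big( i\,\frac{x^{2}+u^{2}}{2}\cot\alpha - i\,xu\csc\alpha \Big),
\quad \alpha \neq n\pi .

% At \alpha = \pi/2 the kernel reduces to e^{-ixu}/\sqrt{2\pi},
% i.e., the ordinary Fourier transform.

% On the FC side, the Riemann-Liouville fractional derivative of order
% \alpha with n-1 < \alpha < n:
{}^{RL}D^{\alpha}_{a}f(t) = \frac{1}{\Gamma(n-\alpha)}\,
\frac{d^{n}}{dt^{n}} \int_{a}^{t} (t-\tau)^{\,n-\alpha-1} f(\tau)\, d\tau .
```

Both definitions interpolate between integer orders: the FRFT in the angle α of a rotation in the time-frequency plane, the FC in the order α of differentiation.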
The particle flow filters, proposed by Daum and Huang, provide a powerful means for density-based nonlinear filtering, but their computation is intense and may be prohibitive for real-time applications. This paper proposes a design for a superfast implementation of the exact particle flow filter using a field-programmable gate array (FPGA) as a parallel environment to speed up computation. Simulation results from a nonlinear filtering example demonstrate that an FPGA can dramatically accelerate particle flow filters through parallelization, at the expense of a tolerable loss in accuracy compared with a nonparallel implementation.
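The computation being parallelized is the per-particle integration of the flow ODE. As an illustrative sketch (not the paper's FPGA design), the scalar linear-Gaussian case, for which the "exact" flow has a closed form, can be integrated with Euler steps; the prior, measurement model, and step count below are assumed values. Since the flow mean obeys the same linear ODE, flowing a single particle starting at the prior mean should land on the Kalman posterior mean.

```python
# Euler integration of the exact particle flow dx/dlam = A(lam) x + b(lam)
# for a scalar linear-Gaussian model, with
#   A(lam) = -0.5 P H (lam H P H + R)^-1 H
#   b(lam) = (1 + 2 lam A) ((1 + lam A) P H z / R + A m0).
def exact_flow_mean(m0, P, z, R, H=1.0, steps=2000):
    x = m0
    dlam = 1.0 / steps
    for k in range(steps):
        lam = k * dlam
        A = -0.5 * P * H * H / (lam * H * P * H + R)
        b = (1 + 2 * lam * A) * ((1 + lam * A) * P * H * z / R + A * m0)
        x += dlam * (A * x + b)     # one Euler step in pseudo-time
    return x

# Prior N(0, 1), measurement z = x + v with v ~ N(0, 1), observed z = 2.
flowed = exact_flow_mean(m0=0.0, P=1.0, z=2.0, R=1.0)
kalman = (1.0 * 2.0 + 1.0 * 0.0) / (1.0 + 1.0)   # closed-form posterior mean
```

The inner loop is identical for every particle, which is exactly why mapping particles to parallel FPGA pipelines pays off.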
The intense emission of the earth limb within a sensor's field of view contributes strongly to the observed images. Because of the low signal-to-noise ratio (SNR), detecting small targets against the earth-limb background is challenging, especially detecting point-like targets from a single frame. To improve detection, track-before-detect (TBD) over the frame sequence is performed. In this paper, a new technique is proposed to determine target-associated trajectories by jointly carrying out background removal, maximum value projection (MVP), and the Hough transform. The bright earth-limb background in the observed images is removed according to its profile characteristics. For a moving target, the corresponding pixels in the MVP image shift approximately regularly through the time sequence, and the target trajectory is determined by the Hough transform according to the differing pixel characteristics of the target versus clutter and noise. Compared with traditional frame-by-frame methods, determining associated trajectories from the MVP reduces the computational load. Numerical simulations demonstrate the effectiveness of the proposed approach.
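The MVP-plus-Hough pipeline can be sketched on synthetic data. This is a toy illustration, not the paper's implementation: image size, intensity threshold, and the coarse 10-degree Hough grid are assumed values, and background removal is omitted because the toy background is already flat.

```python
# Maximum value projection over a frame stack, then a coarse Hough vote
# to recover the straight-line trajectory of a moving point target.
import math

N = 8
# Target of intensity 100 moving along the diagonal y = x over 6 frames.
frames = []
for t in range(6):
    f = [[1.0] * N for _ in range(N)]
    f[t][t] = 100.0
    frames.append(f)

# MVP: per-pixel maximum over the time axis collapses the whole
# trajectory into one image.
mvp = [[max(f[y][x] for f in frames) for x in range(N)] for y in range(N)]

# Coarse Hough transform on bright MVP pixels: each candidate pixel votes
# for quantized (theta, rho) line parameters.
votes = {}
for y in range(N):
    for x in range(N):
        if mvp[y][x] > 50.0:                     # candidate target pixel
            for kt in range(18):                 # theta in 10-degree steps
                theta = math.pi * kt / 18.0
                rho = round(x * math.cos(theta) + y * math.sin(theta))
                votes[(kt, rho)] = votes.get((kt, rho), 0) + 1

(best_kt, best_rho), best_count = max(votes.items(), key=lambda kv: kv[1])
```

The winning accumulator bin collects almost all of the trajectory pixels in a single vote over one projected image, which is the computational saving over frame-by-frame association.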
Moving object detection is an essential issue in many computer vision and video processing tasks. In this paper, a method for detecting moving objects with a panoramic system is proposed. It can detect ground moving objects while the camera rotates, so it is called moving objects detection in rotation (MODIR). MODIR enhances the detection area and flexibility of the panoramic system. When the camera rotates, both the background and the moving objects move in the image; unlike traditional methods, MODIR aims to segment out the independently moving entities according to the motions in the video, whether or not the imaging platform is moving. First, the correspondence between images captured from two different views is deduced from multi-view geometry, and this correspondence is used to distinguish moving objects from the stationary background. Second, a multi-frame moving object detection framework is established, which reduces the impact of image matching errors and cumulative error on the detection. In the experiments, an evaluation metric is used to compare the performance of MODIR with traditional methods, and many videos captured by the panoramic system are processed by MODIR to demonstrate its good performance in practice.
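The two-view correspondence for a rotating camera can be made concrete. For pure rotation, image-to-image mapping is the homography H = K R K⁻¹, so static-background pixels obey x₂ ~ H x₁ while independently moving objects violate it; the intrinsics and rotation below are toy assumptions, not values from the paper.

```python
# Verify the pure-rotation homography H = K R K^-1 on a static point:
# its view-1 pixel transferred through H lands exactly on its view-2 pixel.
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def project(K, X):
    """Pinhole projection of a camera-frame 3-D point to pixel coords."""
    p = matvec(K, X)
    return [p[0] / p[2], p[1] / p[2]]

# Assumed intrinsics: focal length 500 px, principal point (320, 240).
K = [[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]]
Kinv = [[1 / 500.0, 0.0, -320.0 / 500.0],
        [0.0, 1 / 500.0, -240.0 / 500.0],
        [0.0, 0.0, 1.0]]

# Camera rotates 5 degrees about its vertical (y) axis between frames.
a = math.radians(5.0)
R = [[math.cos(a), 0.0, math.sin(a)],
     [0.0, 1.0, 0.0],
     [-math.sin(a), 0.0, math.cos(a)]]
H = matmul(matmul(K, R), Kinv)

X = [0.2, -0.1, 4.0]                  # static point, camera-frame coords
x1 = project(K, X)                    # pixel in view 1
x2 = project(K, matvec(R, X))         # pixel in view 2 (rotated camera)
p = matvec(H, [x1[0], x1[1], 1.0])    # transfer view-1 pixel through H
x2_pred = [p[0] / p[2], p[1] / p[2]]
```

A pixel whose transferred position disagrees with its tracked position by more than the matching error is classified as independently moving; the multi-frame framework then accumulates this evidence over time.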
We propose a three-view constraint for moving object detection with a moving camera. The proposed method classifies feature points in the video sequence as background or moving object by applying the epipolar constraint together with a novel geometric constraint called the "three-view distance constraint". This constraint, the main contribution of this paper, is derived from the relative camera poses across three different views and is implemented within the detection framework. Unlike the epipolar constraint, whose degenerate region is a surface, the three-view distance constraint reduces the degeneracy to a line; in particular, it can detect a moving object followed by a camera moving in the same direction. We evaluate the proposed method on several video sequences to demonstrate the effectiveness and robustness of the three-view distance constraint.
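The first-stage epipolar test can be sketched as follows. This is a hedged toy example, not the paper's code: with relative pose (R, t) between two views, a static point's normalized projections satisfy x₂ᵀ E x₁ = 0 with E = [t]× R, while a point that moves off its epipolar plane leaves a nonzero residual. The pose and points are assumed values; the three-view distance constraint itself (which additionally handles motion that stays on the epipolar plane) is not reproduced here.

```python
# Classify a point as static or moving by its epipolar residual |x2^T E x1|.
def skew(t):
    """Cross-product matrix [t]x."""
    return [[0.0, -t[2], t[1]], [t[2], 0.0, -t[0]], [-t[1], t[0], 0.0]]

def epipolar_residual(E, x1, x2):
    """x1, x2 are normalized homogeneous image points [u, v, 1]."""
    Ex1 = [sum(E[i][k] * x1[k] for k in range(3)) for i in range(3)]
    return abs(sum(x2[i] * Ex1[i] for i in range(3)))

def normalize(X):
    """Project a camera-frame 3-D point to normalized image coordinates."""
    return [X[0] / X[2], X[1] / X[2], 1.0]

t = [1.0, 0.0, 0.0]      # camera 2 translated one unit to the right
E = skew(t)              # R = I, so the essential matrix is just [t]x

X = [0.5, 0.3, 5.0]                              # static scene point
x1 = normalize(X)
x2 = normalize([X[0] - t[0], X[1] - t[1], X[2] - t[2]])
static_res = epipolar_residual(E, x1, x2)

Xm = [0.5, 0.8, 5.0]                             # same point after moving in y
x2m = normalize([Xm[0] - t[0], Xm[1] - t[1], Xm[2] - t[2]])
moving_res = epipolar_residual(E, x1, x2m)
```

Note that a point moving within its epipolar plane would also yield a zero residual, which is precisely the degeneracy the three-view distance constraint is designed to shrink.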
Moving object tracking is one of the most important research directions in computer vision. The challenges in designing a robust tracker are usually caused by partial or complete occlusion of the target. A multi-camera tracking algorithm based on the homography relation among three views can handle this issue effectively, since combining information from cameras in different views makes the target representation more complete and accurate. In this paper, a robust visual tracking algorithm based on the homography relations of three cameras in different views is presented to cope with occlusion. First, as the main contribution of this paper, a tracking algorithm based on low-rank matrix representation within a particle filter framework is applied to track the same target in the common region of each view. A target model and an occlusion model are established, and an alternating optimization algorithm solves the resulting formulation during tracking. Then, the view in which the target has the largest occlusion weight is designated the principal plane, and homographies are calculated to obtain the mapping relations between the views. Finally, the images of the other two views are projected onto the principal plane; through the homography relations, complete information about the occluded target can be recovered. The proposed algorithm has been examined on several challenging image sequences, and experiments show that it avoids tracking failure, especially under occlusion, and improves tracking accuracy compared with other state-of-the-art algorithms.
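The cross-view transfer step can be sketched with a plane-induced homography. This is an illustrative toy, not the paper's method: when the target is occluded in the principal view, its pixel position observed in another view is mapped into the principal view through the homography induced by the ground plane z = 0. The intrinsics and camera poses are assumed values; the paper estimates the homographies from the imagery itself.

```python
# Transfer a target's foot point (on the ground plane z = 0) from view 2
# into view 1 via the plane-induced homography H21 = Hp1 * inv(Hp2),
# where Hp_i = K [r1_i r2_i t_i] maps plane coords (X, Y, 1) to pixels.
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def inv3(A):
    """Closed-form 3x3 inverse via the adjugate."""
    (a, b, c), (d, e, f), (g, h, i) = A
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return [[(e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det],
            [(f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det],
            [(d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det]]

def dehom(p):
    return [p[0] / p[2], p[1] / p[2]]

K = [[400.0, 0.0, 320.0], [0.0, 400.0, 240.0], [0.0, 0.0, 1.0]]  # assumed

# View 1: identity rotation, camera 5 units in front of the plane.
Hp1 = matmul(K, [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 5.0]])
# View 2: rotated 10 degrees about y and shifted sideways.
a = math.radians(10.0)
R2 = [[math.cos(a), 0.0, math.sin(a)],
      [0.0, 1.0, 0.0],
      [-math.sin(a), 0.0, math.cos(a)]]
Hp2 = matmul(K, [[R2[0][0], R2[0][1], -1.0],     # columns: r1, r2, t
                 [R2[1][0], R2[1][1], 0.0],
                 [R2[2][0], R2[2][1], 5.0]])

H21 = matmul(Hp1, inv3(Hp2))        # view-2 pixels -> view-1 pixels

ground = [0.4, -0.2, 1.0]                   # target foot point on the plane
x1 = dehom(matvec(Hp1, ground))             # where view 1 would see it
x2 = dehom(matvec(Hp2, ground))             # what view 2 actually observes
x1_from_x2 = dehom(matvec(H21, [x2[0], x2[1], 1.0]))
```

Because all three cameras see the same ground plane, these transfers let the unoccluded views fill in the target's position in the principal plane, which is the recovery mechanism the abstract describes.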