Various saliency detection methods have been proposed in recent years. These methods often complement each other, so combining them in an appropriate way is an effective approach to saliency analysis. Existing aggregation methods assign a weight to each entire saliency map, ignoring that features perform differently in different parts of an image and differ in how well they distinguish the foreground from the background. In this work, we present a Bayesian-probability-based framework for multi-feature aggregation. We formulate saliency detection as a two-class classification problem. The saliency maps generated from each feature are decomposed into pixels. Based on statistics of each saliency value's reliability for foreground and background detection, we generate an accurate, uniform, per-pixel saliency mask without any manually set parameters. This approach significantly suppresses each feature's misclassifications while preserving its sensitivity to the foreground or background. Experiments on public saliency benchmarks show that our method achieves results equal to or better than state-of-the-art approaches. We also construct a new dataset containing 1500 images with human-labeled ground truth.
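The per-pixel Bayesian fusion described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the histogram-based likelihood estimates, and the uniform prior are all assumptions introduced for the example. Each feature's saliency value at a pixel is treated as an observation, and the learned reliability statistics (here, foreground/background likelihood histograms) are combined into a posterior foreground probability.

```python
import numpy as np

def aggregate_saliency(maps, fg_hists, bg_hists, prior_fg=0.5, bins=16):
    """Fuse per-feature saliency maps with a per-pixel Bayesian rule.

    maps     : list of HxW arrays with values in [0, 1], one per feature
    fg_hists : list of length-`bins` arrays, P(saliency bin | foreground),
               estimated offline from labeled data (assumed given here)
    bg_hists : list of length-`bins` arrays, P(saliency bin | background)
    Returns the per-pixel posterior P(foreground | observations).
    """
    log_fg = np.log(prior_fg)
    log_bg = np.log(1.0 - prior_fg)
    for s, h_fg, h_bg in zip(maps, fg_hists, bg_hists):
        # Quantize each pixel's saliency value into a histogram bin.
        idx = np.minimum((s * bins).astype(int), bins - 1)
        # Accumulate log-likelihoods, assuming conditional independence
        # of features given the class (a naive-Bayes-style assumption).
        log_fg = log_fg + np.log(h_fg[idx] + 1e-12)
        log_bg = log_bg + np.log(h_bg[idx] + 1e-12)
    # Posterior via the log-odds: sigmoid(log_fg - log_bg).
    return 1.0 / (1.0 + np.exp(log_bg - log_fg))
```

Because the decision is made independently at each pixel, a feature that is reliable only in part of the image still contributes there, while its unreliable regions are down-weighted by the likelihood statistics.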
This paper proposes an automatic target locating system based on dual-field imaging to improve the aiming stability of light weapons. The system consists of a wide-field-of-view (WFOV) camera and a narrow-field-of-view (NFOV) camera. The WFOV camera searches for pedestrians in the scene, while the NFOV camera tracks a pedestrian and aims at it accurately. The video signal is sent to a PC processing unit, and control signals are sent back to the imaging system. The automatic target tracking algorithm combines Adaboost and the Median-Flow algorithm; it tracks pedestrians and locates the head of the target. Experimental results show that the dual-field imaging system and the proposed algorithm achieve robust target tracking performance.
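The core of the standard Median-Flow tracker referenced above is forward-backward error filtering: points inside the bounding box are tracked forward one frame and then back, points whose round trip disagrees are discarded, and the box is shifted by the median displacement of the survivors. A minimal sketch of one translation-only update step (the function name is illustrative, and the full algorithm additionally updates scale from pairwise point-distance ratios):

```python
import numpy as np

def median_flow_update(box, pts, fwd_pts, bwd_pts):
    """One Median-Flow update step (translation only).

    box     : (x, y, w, h) bounding box at frame t
    pts     : Nx2 points sampled inside the box at frame t
    fwd_pts : Nx2 points tracked forward to frame t+1 (e.g. by Lucas-Kanade)
    bwd_pts : Nx2 points tracked back from frame t+1 to frame t
    """
    # Forward-backward error: distance between a point and its round trip.
    fb_err = np.linalg.norm(pts - bwd_pts, axis=1)
    # Keep the 50% of points with the smallest forward-backward error.
    keep = fb_err <= np.median(fb_err)
    # Shift the box by the median displacement of the surviving points;
    # the median makes the update robust to remaining outliers.
    disp = fwd_pts[keep] - pts[keep]
    dx, dy = np.median(disp, axis=0)
    x, y, w, h = box
    return (x + dx, y + dy, w, h)
```

In a full pipeline such as the one described, an Adaboost pedestrian detector would initialize or re-initialize the bounding box, and this update step would carry it frame to frame.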