Deep neural networks have recently been applied to the video compressive sensing (VCS) task. Existing DNN-based VCS methods compress and reconstruct the scene video in either the space or the time dimension alone, ignoring the spatial-temporal correlation of the video. They also generally use a pixel-wise loss function, which causes over-smoothed results. In this paper, we propose a perceptual spatial-temporal VCS network. The spatial-temporal VCS network, which compresses and recovers the video in both the space and time dimensions, preserves the spatial-temporal correlation of the video. In addition, we refine the perceptual loss by selecting specific feature-wise loss terms and adding a pixel-wise loss term. The refined perceptual loss guides the spatial-temporal network to retain more textures and structures. Experimental results show that the proposed method achieves better visual quality with less recovery time than the state of the art.
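The refined loss described above combines feature-wise and pixel-wise terms. A minimal sketch of that combination, with the feature extractor, weighting, and all names being illustrative assumptions rather than the paper's actual formulation:

```python
import numpy as np

def perceptual_loss(pred, target, feat_fn, alpha=0.5):
    """Hypothetical refined perceptual loss: a pixel-wise MSE term plus a
    feature-wise MSE term computed on feat_fn activations (a stand-in for
    a pretrained network's feature maps). alpha balances the two terms."""
    pixel_term = np.mean((pred - target) ** 2)
    feat_term = np.mean((feat_fn(pred) - feat_fn(target)) ** 2)
    return alpha * pixel_term + (1.0 - alpha) * feat_term

# Toy "feature extractor": horizontal gradients as a crude feature map.
feat = lambda x: np.diff(x, axis=-1)

pred = np.zeros((4, 4))
target = np.ones((4, 4))
loss = perceptual_loss(pred, target, feat)  # pixel term 1.0, feature term 0.0
```

In practice the feature-wise terms would come from selected layers of a pretrained network; the gradient-based `feat` here only illustrates the structure of the loss.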
The Dynamic Vision Sensor (DVS) is an event-based camera that captures changing pixels in the scene, recording them as a stream of events. In this paper, we use a unique approach to visualize the events a DVS captures as "DVS images". The DVS is sensitive enough to capture objects moving at high speed, but it also captures noise. To improve image quality, we remove the noise from these images. Unlike traditional images, both the noise and the objects in "DVS images" consist of scattered points, so traditional denoising methods are difficult to apply. This paper proposes an efficient approach for "DVS image" noise removal. It is based on the K-SVD algorithm, which we adapt to this specific application. The proposed framework can handle "DVS images" containing different amounts of noise. Experiments show that the proposed method works well on both a fixed and a moving DVS.
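K-SVD denoising alternates dictionary updates with a sparse-coding step over image patches. A minimal sketch of that sparse-coding step, Orthogonal Matching Pursuit, under the assumption of a unit-norm dictionary; this is an illustration of the standard building block, not the paper's adapted algorithm:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: approximate signal y with at most k
    atoms (columns, assumed unit-norm) of dictionary D. This is the
    sparse-coding step used inside K-SVD-style patch denoising."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit of y on the selected atoms
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

# Toy example: with an identity dictionary, OMP recovers the sparse signal.
D = np.eye(3)
y = np.array([0.0, 2.0, 0.0])
x = omp(D, y, k=1)
```

Reconstructing each patch from a few dictionary atoms is what suppresses isolated noise events: scattered noise points correlate poorly with any learned atom and are dropped from the sparse representation.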
Small object detection is a challenging task in computer vision due to the limited resolution and information of small objects. To address this problem, most existing methods sacrifice speed for improved accuracy. In this paper, we aim to detect small objects at high speed, using the Single Shot MultiBox Detector (SSD), the best object detector with respect to the accuracy-vs-speed trade-off, as our base architecture. We propose a multi-level feature fusion method that introduces contextual information into SSD to improve accuracy on small objects. For the fusion operation, we design two feature fusion modules, a concatenation module and an element-sum module, which differ in how they add contextual information. Experimental results show that these two fusion modules raise mAP on PASCAL VOC2007 over the baseline SSD by 1.6 and 1.7 points respectively, with 2-3 point improvements on some small-object categories. Their testing speeds are 43 and 40 FPS respectively, exceeding the state-of-the-art Deconvolutional Single Shot Detector (DSSD) by 29.4 and 26.4 FPS.
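The two fusion modules differ only in how the contextual feature map is merged with the target one. A minimal sketch of that difference, assuming channel-first `C x H x W` arrays already brought to matching spatial size (layout and names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def fuse_concat(feat_a, feat_b):
    """Concatenation module: stack the two feature maps along the channel
    axis, so channel count grows to C_a + C_b."""
    return np.concatenate([feat_a, feat_b], axis=0)

def fuse_elementsum(feat_a, feat_b):
    """Element-sum module: add the feature maps element-wise; both inputs
    must share the same shape (after any up-sampling), so channel count
    is unchanged."""
    return feat_a + feat_b

# Toy feature maps: 2 channels of 4x4 each.
a = np.ones((2, 4, 4))
b = np.zeros((2, 4, 4))
concat_out = fuse_concat(a, b)      # shape (4, 4, 4)
sum_out = fuse_elementsum(a, b)     # shape (2, 4, 4)
```

Concatenation lets a following convolution learn how to weight the contextual channels but increases channel count; element-sum keeps the tensor shape fixed at the cost of merging the sources uniformly.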