Vehicle detection is an important topic for advanced driver-assistance systems. This paper proposes an adaptive approach to real-time monocular vehicle detection on an embedded system that remains accurate under challenging conditions. Scene classification is performed by a simplified convolutional neural network, with hypotheses generated by SoftMax regression. The classification output is then used to optimize the detection parameters for hypothesis generation and verification. On this basis, we introduce a sample-reorganization mechanism to improve the performance of vehicle hypothesis verification, and a hypothesis-leap mechanism to improve the operating efficiency of the on-board system. A practical on-road test verifies both the detection accuracy and the speed of the designed on-board system.
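The SoftMax stage above can be illustrated with a minimal sketch. The paper's simplified CNN is not reproduced here; the feature vector, random weights, and scene labels below are purely hypothetical placeholders for whatever the trained network would produce.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classify_scene(features, W, b, labels):
    """Score CNN features against each scene class; return the most likely label."""
    probs = softmax(features @ W + b)
    return labels[int(np.argmax(probs))], probs

# Toy example: a 4-D feature vector and three hypothetical scene classes.
labels = ["highway", "urban", "tunnel"]
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # placeholder weights; in practice these are learned
b = np.zeros(3)
scene, probs = classify_scene(np.array([0.2, 1.1, -0.3, 0.5]), W, b, labels)
```

The predicted scene could then index into a table of detection parameters, which is one plausible reading of the adaptive parameter-optimization step.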
List fusion is a critical problem in information retrieval. Approaches that assign uniform weights to all lists ignore the correctness, importance, and individuality of the various detectors in a concrete application. In this paper, we propose a nonuniform, rational, optimized paradigm for TRECVid list fusion that faithfully preserves the precision of the outcomes and maximizes Average Precision (AP). We exhaustively search the space spanned by the feature vectors for the parameter set that yields the best AP. To accelerate the fusion of the input score lists, we train our model on the training set and apply the learned parameters to fuse new vectors. By adopting nonuniform rational blending functions, the problem of weight selection is converted into one of parameter selection in the space associated with these functions. The high precision, multiresolution, controllability, and stability of rational functions aid parameter selection, and the space of candidate fusion weights becomes correspondingly large. The correctness of our proposal is verified by comparison with average and linear fusion results.
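One way to make the parameter-selection idea concrete is a sketch that derives fusion weights from a rational Bernstein basis and exhaustively searches its parameters for the best AP on training data. The paper's exact blending functions are not specified in the abstract, so the basis choice, grids, and score lists below are illustrative assumptions, not the authors' method.

```python
import itertools
from math import comb

def rational_bernstein_weights(u, r):
    """Fusion weights from a rational Bernstein basis at parameter u in [0, 1];
    r holds the positive control weights of the rational functions."""
    n = len(r) - 1
    basis = [comb(n, i) * u**i * (1 - u)**(n - i) for i in range(n + 1)]
    total = sum(ri * bi for ri, bi in zip(r, basis))
    return [ri * bi / total for ri, bi in zip(r, basis)]

def fuse(score_lists, weights):
    """Weighted late fusion of per-detector score dicts, then re-rank."""
    fused = {}
    for scores, w in zip(score_lists, weights):
        for doc, s in scores.items():
            fused[doc] = fused.get(doc, 0.0) + w * s
    return sorted(fused, key=fused.get, reverse=True)

def average_precision(ranking, relevant):
    hits, total = 0, 0.0
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            total += hits / rank
    return total / max(len(relevant), 1)

def search_parameters(score_lists, relevant, u_grid, r_grid):
    """Exhaustive search over (u, r) for the setting maximizing training AP."""
    best_ap, best_params = -1.0, None
    for u in u_grid:
        for r in itertools.product(r_grid, repeat=len(score_lists)):
            w = rational_bernstein_weights(u, list(r))
            ap = average_precision(fuse(score_lists, w), relevant)
            if ap > best_ap:
                best_ap, best_params = ap, (u, r)
    return best_ap, best_params

# Toy training data: two detectors' score lists and the relevant items.
score_lists = [{"a": 0.9, "b": 0.8, "c": 0.1},
               {"a": 0.1, "b": 0.2, "c": 0.9}]
relevant = {"a", "b"}
best_ap, best_params = search_parameters(
    score_lists, relevant,
    u_grid=[0.1, 0.3, 0.5, 0.7, 0.9],
    r_grid=(0.5, 1.0, 2.0))
```

The learned `(u, r)` would then be applied unchanged to fuse new score lists, mirroring the train-once, fuse-many workflow the abstract describes.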
Visual cryptography is a powerful technique that combines the notions of perfect ciphers and secret sharing in cryptography with those of raster graphics. A binary image can be divided into shares that are stacked together to approximately recover the original image. Unfortunately, the technique has seen little use, primarily because the decryption process entails a severe degradation in image quality in terms of lost resolution and contrast. Its usage is further hampered by the lack of proper techniques for handling gray-scale and color images. We develop a novel technique that enables visual cryptography of color as well as gray-scale images. Using halftoning and a novel microblock encoding scheme, the technique offers a unique flexibility: a color image is encrypted once, yet the same ciphertext admits three types of decryption, each recovering the image at a different quality. Physically stacking transparencies recovers an image of traditional visual-cryptography quality; an enhanced stacking technique decrypts to a halftone-quality image; and a computation-based decryption scheme makes perfect recovery of the original image possible. Based on this basic scheme, we establish a progressive mechanism to share color images at multiple resolutions. We extract shares from each resolution layer to construct a hierarchical structure; images at different resolutions can then be restored by stacking the corresponding shares together. Thus, our technique enables flexible decryption. We implement our technique and present results.
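The stacking behavior underlying all of the above can be seen in the classic (2,2) scheme for binary images, sketched below. This is textbook visual cryptography, not the paper's microblock encoding: each secret pixel expands to two subpixels per share, and stacking acts as a pixelwise OR (black wins).

```python
import random

# Subpixel patterns: each secret pixel expands into two subpixels per share (1 = black).
PATTERNS = [(0, 1), (1, 0)]

def make_shares(image, rng=random):
    """(2,2) visual cryptography: split a binary image (1 = black) into two shares."""
    share1, share2 = [], []
    for row in image:
        r1, r2 = [], []
        for pixel in row:
            p = rng.choice(PATTERNS)
            r1.extend(p)
            # White secret pixel: identical subpixels; black: complementary subpixels.
            r2.extend(p if pixel == 0 else (1 - p[0], 1 - p[1]))
        share1.append(r1)
        share2.append(r2)
    return share1, share2

def stack(share1, share2):
    """Simulate physically stacking transparencies: a subpixel is black if either is."""
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(share1, share2)]

secret = [[0, 1]]                # one white pixel, one black pixel
s1, s2 = make_shares(secret)
recovered = stack(s1, s2)
```

Stacking makes black pixels fully black and white pixels half black, which is exactly the resolution and contrast loss the paper's enhanced and computational decryptions are designed to overcome.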