Computer vision computations require a large number of multiplications, which creates a performance bottleneck. Based on the work of Zhenhong Liu, the multiplications in these algorithms do not always require the high precision provided by processors. As a result, we can reduce computational redundancy by means of multiplication approximation. Following this approach, in this paper, we investigate two major algorithms, namely the convolutional neural network (CNN) and the scale-invariant feature transform (SIFT), to find their error tolerance under multiplication approximation. Multiplication approximation is modeled by injecting a random value into each precise multiplication result. The INRIA and Oxford datasets were used in the SIFT analysis, while the CIFAR-10 and MNIST datasets were used for the CNN experiments. The results show that SIFT can withstand only a small percentage of approximated multiplications, whereas CNN can tolerate over 30%.
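The error-injection model described above can be sketched as follows. This is a minimal illustration, assuming additive noise proportional to each exact product; the fraction of approximated multiplications and the noise scale are hypothetical parameters, not values from the paper.

```python
import numpy as np

def approximate_multiply(a, b, approx_fraction=0.3, noise_scale=0.05, seed=0):
    """Simulate multiplication approximation: a chosen fraction of the
    precise products receives a random additive error.

    `approx_fraction` and `noise_scale` are illustrative assumptions,
    not parameters taken from the paper."""
    rng = np.random.default_rng(seed)
    exact = a * b
    # Select which products are approximated.
    mask = rng.random(exact.shape) < approx_fraction
    # Inject random error proportional to the magnitude of each product.
    noise = rng.normal(0.0, noise_scale, exact.shape) * np.abs(exact)
    return np.where(mask, exact + noise, exact)

# Example: compare an exact and an error-injected dot product.
a = np.ones(1000)
b = np.full(1000, 2.0)
exact_sum = (a * b).sum()
approx_sum = approximate_multiply(a, b).sum()
rel_err = abs(approx_sum - exact_sum) / exact_sum
```

In an experiment of this kind, the accumulated relative error of a large dot product stays small even when a substantial fraction of individual products is perturbed, which is one intuition behind the CNN's tolerance.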
Object tracking based on image processing is used in various applications. In many tracking methods, feature extraction is the key step for object identification, and the image pre-processing that transforms an RGB image into a binary image plays an important role. The conventional pre-processing technique is applied to the whole image frame. In this paper, we propose cropping the regions of the tracked objects using their current tracking positions and feeding each cropped region to the pre-processing stage. With this approach, interference from uninteresting regions is eliminated, resulting in an improved pre-processed image. However, it requires N pre-processing passes, where N is the number of tracked objects in the frame. This cost is alleviated on an FPGA, which is our target platform. The proposed approach is evaluated by comparing its results with the conventional pre-processing method using the same tracking system.
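The per-object cropping idea can be sketched as below. This is a simplified software model, not the FPGA implementation: the fixed square crop size, the grayscale conversion by channel mean, and the fixed binarization threshold are all illustrative assumptions.

```python
import numpy as np

def binarize(rgb, threshold=128):
    """Pre-processing sketch: RGB region -> grayscale -> binary image.
    The channel-mean conversion and fixed threshold are assumptions."""
    gray = rgb.mean(axis=2)
    return (gray > threshold).astype(np.uint8)

def crop_and_preprocess(frame, tracks, size=32):
    """Crop a size x size region around each current tracking position
    (x, y) and pre-process only that region, instead of the whole frame.

    Returns one binary image per tracked object (N passes for N objects)."""
    h, w = frame.shape[:2]
    half = size // 2
    regions = []
    for (x, y) in tracks:
        # Clamp the crop window to the frame boundaries.
        x0, x1 = max(0, x - half), min(w, x + half)
        y0, y1 = max(0, y - half), min(h, y + half)
        regions.append(binarize(frame[y0:y1, x0:x1]))
    return regions

# Example: one bright object on a dark frame, tracked at (50, 50).
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[40:60, 40:60] = 255
masks = crop_and_preprocess(frame, tracks=[(50, 50)])
```

Because each binary mask is computed from a small crop, clutter elsewhere in the frame cannot leak into the pre-processed result; the trade-off is the N independent passes noted above, which parallel hardware can absorb.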