Efficient moving-object tracking requires near-flawless detection results to establish correct correspondences between
frames. This is especially true in the defense sector, where accuracy and speed are critical factors of success. However,
problems such as camera motion, lighting and weather changes, texture variation, and inter-object occlusions result in
misdetections or false positive detections which, in turn, lead to broken tracks. In this paper, we propose to use
background subtraction and an optimized version of Horn & Schunck's optical flow algorithm in order to boost detection
response. We use the frame-differencing method, followed by morphological operations, and show that it works in many
scenarios; the optimized optical flow technique serves to complement the detector's results. Horn & Schunck's
method yields color-coded motion vectors for each frame pixel. To segment the moving regions in the frame, we apply
color thresholding to distinguish the blobs. Next, we extract appearance-based features from each detected object and
establish correspondences between objects' features, in our case the objects' centroids. We use the Euclidean
distance measure to compute the minimum distances between the centroids. The centroids are matched using the
Hungarian algorithm, thus obtaining point correspondences. The Hungarian algorithm's output matrix dictates the
objects' associations with one another. We have tested the algorithm on detecting people in corridor, mall, and field sequences,
and our early results, with an accuracy of 86.4%, indicate that this system is able to detect and track objects in
video sequences robustly.
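The detection and matching pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes grayscale frames, uses scipy's morphological operators in place of whatever structuring elements the paper uses, and omits the optical flow stage entirely. The threshold and minimum-area values are arbitrary placeholders.

```python
import numpy as np
from scipy import ndimage
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist


def detect_centroids(prev_frame, frame, thresh=25, min_area=20):
    """Frame differencing + morphological cleanup -> blob centroids.

    Thresholds the absolute inter-frame difference, removes speckle
    noise with opening, fills small holes with closing, then labels
    the connected components and returns their centroids.
    """
    diff = np.abs(frame.astype(int) - prev_frame.astype(int)) > thresh
    mask = ndimage.binary_opening(diff)      # suppress isolated noise pixels
    mask = ndimage.binary_closing(mask)      # fill small holes inside blobs
    labels, n = ndimage.label(mask)
    idx = range(1, n + 1)
    areas = ndimage.sum(mask, labels, idx)
    cents = ndimage.center_of_mass(mask, labels, idx)
    return np.array([c for c, a in zip(cents, areas) if a >= min_area])


def match_centroids(prev_cents, cents):
    """Hungarian assignment on the pairwise Euclidean-distance matrix."""
    cost = cdist(prev_cents, cents)              # Euclidean distances
    rows, cols = linear_sum_assignment(cost)     # minimum-cost matching
    return list(zip(rows, cols))                 # (prev index, new index) pairs
```

Here `linear_sum_assignment` plays the role of the Hungarian algorithm: the returned index pairs are the point correspondences, and the cost matrix it consumes corresponds to the output matrix of object associations mentioned above.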
Position estimation of a target has always been a critical task in defense applications. A variety of techniques exist for estimating the position of objects over time. This paper discusses the idea of view morphing to generate future images of scenes containing moving objects. It is highlighted that the concepts of view interpolation may be extended to synthesize new views that are not present between the given views with reference to time and/or position. This problem is addressed using view extrapolation, which is based on the assumption that the present motion of non-stationary objects will continue in the same direction and at an unvarying speed. The problem is solved by dividing it into the three steps of Prewarping, View Extrapolation, and Postwarping. It is pointed out that due consideration must be given to the time elapsed between the capture of the original views and the time at which the new view is to be generated; this helps in finding the motion-related parameters of the scene. This paper outlines an algorithm and highlights the issues to be considered when generating future images from the information in existing ones.
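The constant-velocity assumption at the heart of the extrapolation step can be sketched in a few lines. This is only an illustration of that one assumption, not the paper's algorithm: the Prewarping and Postwarping steps (which rectify and de-rectify the views) are omitted, and the function name and time parameters are hypothetical.

```python
import numpy as np


def extrapolate_position(p0, p1, t0, t1, t2):
    """Constant-velocity extrapolation of an object position.

    Given positions p0 at time t0 and p1 at time t1, assumes the motion
    continues in the same direction at unvarying speed, and predicts the
    position at a future time t2. The elapsed times matter: the velocity
    is estimated from (t1 - t0), then propagated over (t2 - t1).
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    v = (p1 - p0) / (t1 - t0)      # estimated velocity from the two views
    return p1 + v * (t2 - t1)      # predicted position in the future view
```

In a full view-extrapolation pipeline, a prediction like this would be computed for the matched feature points of the prewarped views, and the resulting point set would then be postwarped into the final synthesized image.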