The problem of 3-D shape recovery from image focus can be described as determining the shape of the focused image surface (FIS), the surface formed by the best-focused points. The shape from focus (SFF) methods in the literature are fast but inaccurate because of their piecewise constant approximation of the FIS. The SFF method based on the FIS achieves better results through an exhaustive search of the FIS shape using a planar surface approximation, but at the cost of considerably more computation. We present a method that treats the search for the FIS shape as an optimization problem, i.e., maximization of a focus measure over the 3-D image volume. Each image frame in the image volume (sequence) is divided into subimage frames, and the whole image volume is divided into a number of subimage volumes. A rough depth map is determined only at the central pixel of each subimage frame using one of the traditional SFF methods. For each subimage volume, a few image frames are selected around the frame whose index is given by the rough depth at the central pixel of the subimage frame. The search for the FIS shape is then performed within the subimage volumes using a dynamic programming optimization technique. The final depth map is obtained by combining the depth maps of the subimage volumes. The new algorithm considerably reduces computational complexity by restricting the search for the FIS shape to subimage volumes, and it yields better results.
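The rough-depth step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the sum-modified-Laplacian as the traditional focus measure, a fixed square block size, and a small averaging window around each block's central pixel; all of these choices are assumptions.

```python
import numpy as np

def modified_laplacian(img):
    """Sum-modified-Laplacian focus measure (one common SFF choice)."""
    lx = np.abs(2 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
    ly = np.abs(2 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0))
    return lx + ly

def rough_depth(volume, block=8):
    """Rough depth map: for each subimage block, the index of the frame
    that maximizes the focus measure near the block's central pixel."""
    n, h, w = volume.shape
    fm = np.stack([modified_laplacian(f) for f in volume])   # (n, h, w)
    depth = np.zeros((h // block, w // block), dtype=int)
    for i in range(h // block):
        for j in range(w // block):
            cy, cx = i * block + block // 2, j * block + block // 2
            # average focus measure in a 3x3 window around the center
            win = fm[:, cy - 1:cy + 2, cx - 1:cx + 2].mean(axis=(1, 2))
            depth[i, j] = int(np.argmax(win))
    return depth
```

The per-block frame index returned here is what selects the few neighboring frames that form each subimage volume for the subsequent dynamic programming search.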
In this paper, we describe a real-time vehicle tracking method based on image processing techniques. Moving vehicles are segmented from the input image sequence using differential edge images and tracked using statistical invariant moments. The direction of each vehicle is determined by applying the Hough transform to find the straight lines along the direction of the vehicle's principal axis. Motion information is calculated from the displacement of the vehicles and the change in their direction across consecutive frames. The algorithm is tested on several real-time image sequences.
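The moment-based tracking idea can be illustrated with a small sketch. It computes the first two Hu invariant moments of a segmented vehicle mask; these are translation-, scale-, and rotation-invariant, so the same vehicle yields a similar signature in consecutive frames. This is an assumed stand-in, since the abstract does not specify which invariant moments are used.

```python
import numpy as np

def hu_moments(mask):
    """First two Hu invariant moments of a binary object mask, usable
    as a signature to match a vehicle across frames."""
    y, x = np.indices(mask.shape)
    m = mask.astype(float)
    m00 = m.sum()
    cx, cy = (x * m).sum() / m00, (y * m).sum() / m00
    def mu(p, q):                       # central moments
        return (((x - cx) ** p) * ((y - cy) ** q) * m).sum()
    def eta(p, q):                      # scale-normalized central moments
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([h1, h2])
```

Matching the signatures of segmented regions between frames then gives the per-vehicle correspondences from which displacement and direction change are measured.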
Collision avoidance is one of the most important problems for autonomous vehicles, ship navigation, robot manipulators, and similar systems. Image processing techniques can be applied to the collision avoidance of moving objects. A collision can be avoided if the direction of the moving object can be accurately anticipated; the problem is to predict the object's expected path so that other moving objects in that path can be detected and avoided. Searching for obstacles and moving objects already in the expected path is not sufficient: moving objects that will enter the expected path must also be detected for full collision avoidance. In this paper, the expected path of a moving object is determined from its previous history using statistical measurements.
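One simple way to anticipate the expected path from an object's history is a least-squares linear fit to its recent centroid track. The abstract does not specify which statistical estimator is used, so this sketch assumes straight-line extrapolation purely for illustration.

```python
import numpy as np

def predict_path(history, steps=5):
    """Extrapolate an object's expected path by a least-squares linear
    fit to its centroid history (one simple statistical measurement)."""
    history = np.asarray(history, float)          # (n, 2) centroids
    t = np.arange(len(history))
    # fit x(t) and y(t) independently with degree-1 polynomials
    cx = np.polyfit(t, history[:, 0], 1)
    cy = np.polyfit(t, history[:, 1], 1)
    tf = np.arange(len(history), len(history) + steps)
    return np.stack([np.polyval(cx, tf), np.polyval(cy, tf)], axis=1)
```

The predicted positions define the corridor in which obstacles and other moving objects must be checked.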
A frequency transform-based statistical method is proposed for shape matching in MPEG-7. Shape description and the corresponding matching algorithm are among the main concerns of MPEG-7. The normalized frequency transform is invariant to translation and scaling. The image is transformed into the frequency domain using the Fourier transform; two similar images will have similar power spectra. Annular and radial wedge distributions of the power spectrum are extracted; the annular rings and radial wedges can be set arbitrarily. Statistical features, such as the mean and variance, are computed for the power spectrum within each selected ring and wedge. The Euclidean or Minkowski distance of the extracted features is computed with respect to the shapes in the database, and the shape at minimum distance is the candidate match. Simulation results are reported on the MPEG-7 test shapes.
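The ring/wedge feature extraction can be sketched as below. This is a simplified variant under stated assumptions: it uses total power per annular ring and per radial wedge (rather than the paper's mean/variance statistics), fixed ring and wedge counts, and plain Euclidean matching.

```python
import numpy as np

def spectrum_features(img, n_rings=4, n_wedges=8):
    """Annular-ring and radial-wedge energy distributions of the FFT
    power spectrum; ring/wedge counts are arbitrary, as in the text."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    cy, cx = h // 2, w // 2
    r = np.hypot(y - cy, x - cx)
    theta = np.mod(np.arctan2(y - cy, x - cx), np.pi)  # wedges over a half-plane
    rmax = r.max() + 1e-9
    rings = np.array([power[(r >= rmax * k / n_rings) &
                            (r < rmax * (k + 1) / n_rings)].sum()
                      for k in range(n_rings)])
    wedges = np.array([power[(theta >= np.pi * k / n_wedges) &
                             (theta < np.pi * (k + 1) / n_wedges)].sum()
                       for k in range(n_wedges)])
    feats = np.concatenate([rings, wedges])
    return feats / feats.sum()                    # scale normalization

def match(query_feats, db_feats):
    """Index of the nearest database shape under Euclidean distance."""
    return int(np.argmin(np.linalg.norm(db_feats - query_feats, axis=1)))
```

Because the power spectrum discards phase, a translated shape produces the same features, which is the translation invariance the abstract relies on.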
Motion estimation is one of the fundamental problems in digital video processing. One of the most notable approaches to motion estimation is based on estimating a measure of the change of image brightness across the frame sequence, commonly referred to as optical flow. The classical approaches for computing optical flow have many drawbacks. Numerical and least-squares methods for solving the optical flow constraints are susceptible to errors in the presence of occlusion and noise. Two moving objects sharing a common border produce conflicting velocities, and averaging them yields a poor optical flow estimate. Incorrect detection of motion boundaries, since motion is usually not homogeneous, and inexact contour measurements of moving objects are further problems of optical flow methods. Therefore, information such as color and edges has been used alongside optical flow in the literature. Furthermore, the classical methods require a large amount of computation. In this paper, we propose a method that is very fast and yields better motion information for the objects in an image sequence. The possible locations of moving objects are found first, and the Hough transform is then applied only on the detected moving regions to find their optical flow vectors; considerable time is saved by not computing optical flow for the still background parts of the image sequence. A new Boolean-based edge detection is applied to two consecutive input images, and the differential edge image of the resulting two edge maps is computed. A mask for detecting the moving regions is made by dilating the differential edge image. After obtaining the moving regions with the help of this mask, the Hough transform and a voting accumulation method are used to solve the optical flow constraint equations.
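The mask-construction steps (edge maps of two consecutive frames, their differential edge image, then dilation) can be sketched as follows. A simple gradient threshold stands in for the paper's Boolean-based edge detector, and the 3x3 dilation with wrap-around borders is an assumption of this sketch.

```python
import numpy as np

def dilate(mask, iterations=2):
    """Binary dilation with a 3x3 structuring element (plain NumPy;
    wrap-around at borders is ignored for this sketch)."""
    out = mask.copy()
    for _ in range(iterations):
        m = out.copy()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out |= np.roll(np.roll(m, dy, axis=0), dx, axis=1)
    return out

def moving_region_mask(frame_prev, frame_curr, thresh=0.2, iterations=2):
    """Candidate moving regions: XOR of the two edge maps (the
    differential edge image), dilated to cover the moving objects."""
    def edges(img):
        gx = np.abs(np.roll(img, -1, axis=1) - img)
        gy = np.abs(np.roll(img, -1, axis=0) - img)
        return (gx + gy) > thresh
    diff = edges(frame_prev) ^ edges(frame_curr)  # static edges cancel
    return dilate(diff, iterations)
```

Optical flow is then computed only where this mask is set, which is the source of the method's speed advantage over dense classical estimation.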
The voting-based Hough transform avoids the errors associated with least-squares techniques. Computing a large number of points along each constraint line is also avoided by using the transformed slope-intercept parameter domain. The simulation results show that the proposed method is very effective for extracting optical flow vectors and hence for tracking moving objects in image sequences.
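The voting idea can be illustrated with a minimal sketch. Each pixel in a moving region contributes one optical flow constraint line Ix*u + Iy*v + It = 0 in velocity space; cells of a discretized (u, v) accumulator that lie on the line receive votes, and the accumulator peak is the region's flow. The velocity range, grid resolution, and distance-based voting rule here are assumptions; the paper's accumulation in the slope-intercept parameter domain may differ in detail.

```python
import numpy as np

def hough_flow(Ix, Iy, It, vmax=3.0, bins=61):
    """Estimate a region's flow (u, v) by Hough voting on the optical
    flow constraint lines; the peak is robust to outlier pixels that
    would bias a least-squares solution."""
    us = np.linspace(-vmax, vmax, bins)
    acc = np.zeros((bins, bins))                  # rows: v, cols: u
    tol = (us[1] - us[0]) / 2                     # half-cell tolerance
    for ix, iy, it in zip(Ix.ravel(), Iy.ravel(), It.ravel()):
        if abs(ix) < 1e-6 and abs(iy) < 1e-6:
            continue                              # no constraint here
        # perpendicular distance from each (u, v) cell to this line
        d = np.abs(ix * us[None, :] + iy * us[:, None] + it) / np.hypot(ix, iy)
        acc += d < tol
    vi, ui = np.unravel_index(np.argmax(acc), acc.shape)
    return us[ui], us[vi]
```

Because each constraint votes for a whole band of cells, a few pixels with conflicting velocities (e.g., from a neighboring object's border) merely add scattered votes instead of dragging the estimate toward an average.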