This paper describes a new shape matching method using image projection features. Corners are located in the binary image using the radius-vector function. The imaginary line joining the two corners separated by the maximum distance is called the baseline. The image is rotated to align the baseline with the reference axis, and the horizontal and vertical projections of the rotated image are computed. These projections are matched against the projections of the database images using the sorted normalized matching algorithm. The algorithm is tested on various test images.
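As a rough sketch of the projection step (the corner detection, baseline rotation, and the exact sorted normalized matching rule are not reproduced here), a binary image's horizontal and vertical projections are simply its row and column sums; after normalization, two projections can be compared with a distance measure. The function names and the L1 distance are illustrative assumptions, not the paper's exact formulation:

```python
def projections(img):
    """Horizontal and vertical projections of a binary image (list of 0/1 rows)."""
    h = [sum(row) for row in img]        # row sums
    v = [sum(col) for col in zip(*img)]  # column sums
    return h, v

def normalize(p):
    """Scale a projection so its values sum to 1 (guards against empty images)."""
    s = sum(p)
    return [x / s for x in p] if s else p

def projection_distance(a, b):
    """L1 distance between two normalized projections of equal length."""
    return sum(abs(x - y) for x, y in zip(a, b))
```

Identical shapes then yield zero distance, and the normalization makes the comparison insensitive to the total number of object pixels.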
In this paper, we describe real-time vehicle tracking using image processing techniques. Moving vehicles are segmented from the input image sequence using differential edge images and tracked using statistical invariant moments. The direction of each vehicle is determined by applying the Hough transform to find straight lines along the principal axis of the vehicle. Motion information is calculated from the displacement of the vehicles and the change in their direction across consecutive frames. The algorithm is tested on different real-time image sequences.
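The statistical invariant moments used for tracking can be illustrated with central moments, which are invariant to translation because they are taken about the object's centroid. The helper below is a minimal sketch of that idea, not the paper's exact feature set:

```python
def central_moment(points, p, q):
    """Translation-invariant central moment mu_pq of a set of (x, y)
    object pixels, taken about the object's centroid."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    return sum((x - cx) ** p * (y - cy) ** q for x, y in points)
```

Shifting every pixel by the same offset leaves the deviations from the centroid unchanged, so the moment value is identical for the shifted object.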
Collision avoidance is one of the most important problems in autonomous vehicles, ship navigation, and robot manipulation. Image processing techniques can be applied to avoid collisions between moving objects. A collision can be avoided if the direction of the moving object can be accurately anticipated: once the expected path of the moving object is known, obstacles and other moving objects in that path can be detected and avoided. Searching the expected path for obstacles is not sufficient by itself, however; moving objects that will enter the expected path must also be detected for full collision avoidance. In this paper, the expected path of a moving object is determined from its previous history using statistical measurements.
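One simple statistical way to extrapolate an expected path from an object's tracked history is to use the mean frame-to-frame displacement as a constant-velocity estimate. The sketch below assumes the history is a list of (x, y) centroids; it is a stand-in illustration, not the paper's specific statistical measurements:

```python
def predict_path(history, steps):
    """Extrapolate future (x, y) positions from a tracked history of
    centroids, using the mean frame-to-frame displacement as a
    constant-velocity estimate."""
    n = len(history) - 1
    vx = sum(history[i + 1][0] - history[i][0] for i in range(n)) / n
    vy = sum(history[i + 1][1] - history[i][1] for i in range(n)) / n
    x, y = history[-1]
    return [(x + vx * k, y + vy * k) for k in range(1, steps + 1)]
```

Any obstacle whose position falls near one of the predicted points would then be flagged as a potential collision.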
Shape descriptions based on traditional chain codes are very susceptible to small perturbations in the contours of objects. Therefore, direct matching of traditional chain codes cannot be used for shape-based image retrieval from large databases. In this paper, histogram-based chain codes are proposed that can be used for image retrieval. Matching with the modified chain codes is invariant to translation, rotation, and scaling transformations, and has high immunity to noise and small perturbations.
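The core idea of a chain-code histogram can be sketched as follows: a histogram over the 8 directions is already translation-invariant, normalizing by contour length gives approximate scale invariance, and rotations by multiples of 45 degrees become cyclic shifts of the histogram. The exact normalization and matching rule of the paper may differ; this is an illustrative version:

```python
def chain_code_histogram(codes):
    """Histogram of 8-direction chain codes, normalized by contour length
    (translation-invariant, approximately scale-invariant)."""
    hist = [0.0] * 8
    for c in codes:
        hist[c] += 1
    n = len(codes)
    return [h / n for h in hist]

def histogram_distance(a, b):
    """Smallest L1 distance over the 8 cyclic shifts of b, which absorbs
    rotations of the contour by multiples of 45 degrees."""
    return min(sum(abs(a[i] - b[(i + s) % 8]) for i in range(8))
               for s in range(8))
```

A contour and its 90-degree rotation then match with zero distance, whereas raw chain-code strings would disagree at every position.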
Shape is one of the salient features of visual content and can be used in visual information retrieval. The classical radius-vector function is defined only for star-shaped objects. A modified radius-vector function that handles both star-shaped and non-star-shaped objects is presented. The center of gravity is selected as the reference point, and the principal axis is selected as the reference line. The corner points are marked as nodes, and the normalized distances from the reference point to the corner nodes are stored. The Euclidean distance between the center-to-corner distances of the test object and those of the objects in the database is used for shape matching.
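The center-to-corner distance feature can be sketched directly: compute the centroid of the corner nodes, take the distance to each corner, and normalize by the maximum for scale invariance. The normalization choice here is an assumption for illustration; the paper's own normalization may differ:

```python
def radius_vector(corners):
    """Normalized distances from the centre of gravity to each corner node.

    corners: list of (x, y) corner points of the shape.
    """
    n = len(corners)
    cx = sum(x for x, _ in corners) / n
    cy = sum(y for _, y in corners) / n
    d = [((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in corners]
    m = max(d)
    return [v / m for v in d]  # max-normalization gives scale invariance
```

Two such vectors (ordered consistently, e.g. starting from the principal axis) can then be compared with a plain Euclidean distance for matching.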
A frequency-transform-based statistical method is proposed for shape matching in MPEG-7. Shape description and its corresponding matching algorithm are among the main concerns of MPEG-7. The normalized frequency transform is invariant to translation and scaling. The image is transformed into the frequency domain using the Fourier transform; two similar images have the same power spectrum. Annular and radial wedge distributions of the power spectrum are extracted, and the annular and radial wedges can be set arbitrarily. Statistical features such as the mean and variance are computed for the power spectrum within each selected region. The Euclidean or Minkowski distance of the extracted features is computed with respect to the shapes in the database, and the shape at minimum distance is the candidate match. Simulations are performed on the MPEG-7 test shapes.
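The annular feature extraction can be sketched as follows: compute the power spectrum and sum the spectral energy falling inside concentric rings about the DC term. A naive 2-D DFT is used here so the example stays self-contained (in practice an FFT would be used), and the ring boundaries are arbitrary, as the abstract notes:

```python
import cmath

def power_spectrum(img):
    """|F(u, v)|^2 via a naive 2-D DFT (fine for small demo images)."""
    n = len(img)
    ps = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(img[x][y] * cmath.exp(-2j * cmath.pi * (u * x + v * y) / n)
                    for x in range(n) for y in range(n))
            ps[u][v] = abs(s) ** 2
    return ps

def annular_features(ps, rings):
    """Total spectral energy inside concentric rings [lo, hi) about DC."""
    n = len(ps)
    feats = [0.0] * len(rings)
    for u in range(n):
        for v in range(n):
            # wrap frequencies so (0, 0) is the DC term
            du = min(u, n - u)
            dv = min(v, n - v)
            r = (du * du + dv * dv) ** 0.5
            for i, (lo, hi) in enumerate(rings):
                if lo <= r < hi:
                    feats[i] += ps[u][v]
                    break
    return feats
```

For a constant image all of the energy sits in the innermost ring (the DC term), which is the expected sanity check. Radial wedge features would be built analogously by binning on the angle of (du, dv) rather than its radius.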
There are many home network technologies. One of them is IEEE 1394, which supports automatic configuration, QoS-guaranteed real-time A/V transmission, and high bandwidth. In the near future, IEEE 1394 home networks will be deployed and IEEE 1394 digital home appliances will be universally available at a reasonable price. An integrated control technology will then be needed. This kind of control technology is based on the IEC 61883-1 FCP and the AV/C CTS. This paper presents IEEE 1394-based technology for controlling digital home appliances; the test results from its implementation show that it can serve in a home gateway as a simple but effective control technology.
Motion estimation is one of the fundamental problems in digital video processing. One of the most notable approaches to motion estimation is based on estimating the change of image brightness in the frame sequence, commonly referred to as optical flow. The classical approaches to finding optical flow have many drawbacks. Numerical or least-squares methods for solving the optical flow constraints are susceptible to errors in the presence of occlusion and noise. Two moving objects sharing a common border produce conflicting velocities, and averaging them yields a less satisfactory optical flow estimate. Incorrect detection of motion boundaries, since motion is usually not homogeneous, and inexact contour measurements of moving objects are further problems of optical flow methods. Therefore, information such as color and edges has been used alongside optical flow in the literature. Moreover, the classical methods require a large amount of computation. In this paper, we propose a method that is very fast and gives better motion information for objects in image sequences. The possible locations of moving objects are found first, and the Hough transform is then applied only to the detected moving regions to find the optical flow vectors for those regions. A great deal of time is thus saved by not computing optical flow for the still or background parts of the image sequence. A new Boolean-based edge detection is applied to two consecutive input images, and the differential edge image of the resulting two edge maps is computed. A mask for detecting the moving regions is made by dilating the differential edge image. After obtaining the moving regions with the help of this mask, the Hough transform and a voting accumulation method are used to solve the optical flow constraint equations.
The voting-based Hough transform avoids the errors associated with least-squares techniques, and the calculation of a large number of points along each constraint line is avoided by using the transformed slope-intercept parameter domain. The simulation results show that the proposed method is very effective for extracting optical flow vectors and hence for tracking moving objects.
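The Hough voting step can be sketched as follows. Each pixel in a moving region contributes one optical flow constraint Ix·u + Iy·v + It = 0, which is a line in (u, v) velocity space; writing it in slope-intercept form, v = -(Ix·u + It)/Iy, lets each pixel cast one vote per candidate u rather than sampling many points along the line. The peak of the accumulator is the dominant flow. This is a simplified, hypothetical illustration of the voting idea, not the paper's exact accumulation scheme:

```python
def hough_flow(gradients, u_range, v_range):
    """Estimate a dominant flow (u, v) by Hough voting.

    gradients: list of (Ix, Iy, It) spatio-temporal gradient tuples
               sampled from a detected moving region.
    u_range, v_range: iterables of candidate integer velocities.
    """
    acc = {}
    for ix, iy, it in gradients:
        if iy == 0:
            continue  # vertical line in (u, v) space; skipped in this sketch
        for u in u_range:
            # slope-intercept form of the constraint line Ix*u + Iy*v + It = 0
            v = round(-(ix * u + it) / iy)
            if v in v_range:
                acc[(u, v)] = acc.get((u, v), 0) + 1
    return max(acc, key=acc.get)
```

Because every constraint line from the same motion passes through the true (u, v), the votes concentrate there, and an outlier pixel adds only a single stray vote instead of skewing a least-squares average.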
Tracking moving objects is a complex processing task in understanding input images. In this paper, we consider optical flow, one of the moving-object tracking techniques. We propose a new method using the Combinatorial Hough Transform (CHT) and voting accumulation to find the optimal constraint lines, and we use logical operations to reduce the computation time. The proposed method extracts the optical flow of the moving object, from which the motion information is computed. We have simulated the proposed method on test images, including noisy ones.
In this paper, we show how to segment a face from an image of a person in a complex environment, extract features from the segmented image, and build an effective recognition system using the discrete wavelet transform (DWT). The algorithm proceeds as follows. First, two 256-level gray-scale images of size 256 × 256 are captured under constant illumination. A Gaussian filter is used to remove noise from the input image, and a differential image between the background and the input image is obtained. Second, a mask is made by erosion and dilation after binarizing the differential image. Third, the facial image is extracted by projecting the mask onto the input image. Most of the characteristic information of a human face lies in the eyebrows, eyes, nose, and mouth, so the facial characteristics are detected after selecting the local areas containing these features. Fourth, to detect the characteristics of the segmented face image, edges are detected with the Sobel operator; the eye area and the center of the face are then located using the horizontal and vertical components of the edges. The characteristic area, consisting of the eyes, nose, mouth, eyebrows, and cheeks, is detected by searching the edges of the image. Finally, characteristic vectors are extracted by performing the DWT on this characteristic area and are normalized to the range of -1 to +1. The normalized vectors are used as the input vectors of a neural network. Simulation results show a recognition rate of 100% on learned images and 92% on test images.
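The final feature step can be sketched with a one-level 1-D Haar DWT (the simplest wavelet; the paper does not say which wavelet it uses, so this is an assumption) followed by the normalization to [-1, +1]:

```python
def haar_dwt_1d(signal):
    """One level of the Haar DWT: pairwise averages (approximation) and
    pairwise differences (detail), each scaled by 1/sqrt(2).
    Assumes an even-length signal; 2-D DWT applies this to rows, then columns."""
    s = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def normalize_pm1(vec):
    """Map a feature vector into [-1, +1] by dividing by its largest magnitude,
    as expected by the neural-network input layer."""
    m = max(abs(v) for v in vec)
    return [v / m for v in vec] if m else vec
```

The low-frequency approximation coefficients carry most of the facial-region energy and form a compact feature vector; the detail coefficients vanish wherever the signal is locally constant.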
A common concern of neural network models has been the problem of relating the function of complex systems of neurons to what is known about individual neurons, their interconnections, and their offsets. In this paper, we propose a new neural network model that can control and produce the offset patterns of the input-layer, hidden-layer, and output-layer neurons. It consists of an input layer for the signal patterns, a hidden layer for producing the offset patterns, and a memory part between the hidden layer and the output layer. The output of the neurons is calculated using the offset control parameter Rofj. The input layer processes the input patterns to be learned so that the proposed neural network can control and produce the offset patterns, and sends the results to the next layer. The hidden layer produces the offset patterns after receiving the pattern information from the input layer and sends its output to the memory part. The memory part stores the learned output patterns of the hidden layer after comparing them with the input pattern, and sends the stored information to the output layer after the entire learning process. Simulation results show that the proposed neural network can produce the offset patterns and can be efficiently applied to logic circuit design and pattern classification.
In stereo vision, depth information is one of the important parameters for understanding the real world. One method for extracting such depth information is based on the geometry of stereo vision using two cameras separated by a baseline distance. In this paper, we present an improved triangulation method based on stereo vision angles. We set up a stereo vision system that extracts the distance to an object by detecting moving objects using difference images and by obtaining depth information with the improved triangulation method. It has been implemented on a TMS320C30 DSP board. Experimental results show that the proposed vision system achieves an accuracy of 0.2 mm at a range of 400 mm.
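The underlying angle-based triangulation can be sketched from the geometry alone: with the two cameras on a baseline of length B and the lines of sight making angles θL and θR with the baseline, the point's depth z satisfies B = z·(cot θL + cot θR). This is the textbook relation, not necessarily the paper's improved variant:

```python
import math

def depth_from_angles(baseline, theta_l, theta_r):
    """Depth of a point seen by two cameras a `baseline` apart.

    theta_l, theta_r: angles (radians) between the baseline and each
    camera's line of sight to the point. Derived from
    B = z * (cot(theta_l) + cot(theta_r)).
    """
    return baseline / (1 / math.tan(theta_l) + 1 / math.tan(theta_r))
```

For example, a point centred between the cameras at the same distance as the baseline length gives symmetric viewing angles, and the formula recovers that distance.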