This work presents a new global motion estimation algorithm for MPEG-compressed video sequences. It makes use of a feature extraction technique based on the Generalized Hough Transform, which provides rotation, scale and displacement parameters when comparing two frames of a video sequence. Thus, pan, tilt, swing (rotation about the z-axis) and zoom effects can be studied with the proposed algorithm.
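The pan, tilt, swing and zoom effects above correspond to a 4-parameter similarity motion model. A minimal sketch of how such a model maps pixel coordinates (the function name and interface are illustrative, not from the paper):

```python
import math

def apply_global_motion(x, y, theta, scale, dx, dy):
    """Map a pixel (x, y) under a 4-parameter global motion model:
    rotation by theta about the z-axis (swing), uniform zoom 'scale',
    and displacement (dx, dy) covering pan/tilt."""
    xr = scale * (x * math.cos(theta) - y * math.sin(theta)) + dx
    yr = scale * (x * math.sin(theta) + y * math.cos(theta)) + dy
    return xr, yr
```

With theta = 0 and scale = 1 the model reduces to a pure displacement, i.e. a pan/tilt between the two frames.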
The DC coefficients of the DCT are extracted from the MPEG stream and used to create DC images, which are the starting point for the global motion estimation algorithm. Applying the feature extraction technique to these DC images allows motion estimation to be performed with reduced processing time, since full decompression of the video is avoided.
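A DC image is a reduced-resolution version of the frame built from one value per 8×8 DCT block. A minimal sketch, assuming an intra-coded frame whose DC coefficients are already available in raster order (the layout and function name are assumptions for illustration):

```python
import numpy as np

def dc_image(dc_coeffs, blocks_w, blocks_h):
    """Build a reduced-resolution DC image from the DC coefficients of
    the 8x8 DCT blocks of an intra-coded frame.  For the standard 2-D
    DCT, the DC term equals 8 times the block mean, so dividing by 8
    recovers the average intensity of each block, yielding an image
    1/8 the width and height of the original frame."""
    img = np.asarray(dc_coeffs, dtype=float).reshape(blocks_h, blocks_w)
    return img / 8.0
```

Because only the DC terms are needed, the image can be obtained with minimal parsing of the compressed stream, which is what makes compressed-domain processing fast.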
Pseudocode and implementation details of the algorithm are provided, together with statistics illustrating its efficiency.
Abrupt video editing effects (commonly called scene cuts) are transitions in which the image changes globally from one frame to the next. The automatic detection of these effects is a widely studied research area that has produced a number of families of working techniques. Two of these families are the methods based on geometric features of the images (basically, edges and contours) and those based on image intensity or chromaticity.
Feature-based techniques are highly reliable for cut detection, but they are sometimes unfeasible because of the time needed to obtain results. Intensity-based techniques are less accurate at detecting cuts, although they are very fast. Thus a new hybrid technique, obtained by combining both approaches, is proposed.
The intensity-based method is first used to locate 'probable cuts' quickly, and the edge-based method is then applied to check whether each 'probable cut' actually exists. In this way, a large reduction in processing time is achieved with little or no loss of accuracy.
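The two-stage pipeline described above can be sketched as follows. The dissimilarity functions and thresholds are hypothetical placeholders, not the paper's actual measures:

```python
def hybrid_cut_detect(frames, intensity_diff, edge_diff,
                      t_intensity, t_edge):
    """Two-stage hybrid cut detection sketch.
    Stage 1: a fast intensity-based dissimilarity flags frame indices
    as 'probable cuts'.
    Stage 2: the slower edge-based dissimilarity is evaluated only on
    those candidates, confirming or rejecting each one.
    intensity_diff / edge_diff are callables returning a dissimilarity
    between two consecutive frames (hypothetical interfaces)."""
    candidates = [i for i in range(1, len(frames))
                  if intensity_diff(frames[i - 1], frames[i]) > t_intensity]
    return [i for i in candidates
            if edge_diff(frames[i - 1], frames[i]) > t_edge]
```

The speed-up comes from the fact that the expensive edge-based measure runs only on the (typically few) candidates that survive the cheap first stage.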
Results of five well-known algorithms for automatic cut detection are shown, together with those of the proposed method; the comparison shows that the new technique obtains very good results.
This work presents a new video feature extraction technique based on the Generalized Hough Transform (GHT). This technique provides a way to define a similarity measure between two different frames, which establishes the basis for scene cut detection algorithms. Moreover, the GHT makes it possible to compute the differences between two frames in terms of rotation, scale and displacement, providing a framework for the development of global motion estimation algorithms. In addition, gradual transition detection algorithms (fades, dissolves, etc.) can also be developed. To illustrate the possibilities of this technique, a scene cut detection algorithm is also proposed. This algorithm works with MPEG video in the compressed domain, achieving real-time processing. An improved thresholding technique is also presented. This technique uses two different sets of similarity values, making the scene cut detection algorithm perform well on different types of videos. The thresholding process reports two different kinds of cuts: real cuts and probable cuts. It also detects the location of dynamic scenes, which can be used to perform further semantic analysis. Finally, the use of the improved thresholding technique and a set of optimized parameters results in an algorithm that needs no human intervention. Several tests have been carried out on long videos containing more than 1400 cuts, and a comparison with another well-known cut detection algorithm has also been performed.
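The distinction between real cuts and probable cuts suggests a dual-threshold decision on the frame-to-frame similarity values. A minimal sketch of that idea, with hypothetical threshold names and a simplified scalar similarity (the paper's actual thresholding uses two sets of similarity values):

```python
def classify_transitions(similarities, t_cut, t_probable):
    """Dual-threshold classification sketch (t_cut < t_probable).
    similarities[i] is the similarity between frame i and frame i+1:
      - below t_cut            -> real cut (reported directly);
      - between the thresholds -> probable cut (needs verification);
      - above t_probable       -> no transition.
    A long run of probable cuts can be flagged as a dynamic scene."""
    real, probable = [], []
    for i, s in enumerate(similarities):
        if s < t_cut:
            real.append(i)
        elif s < t_probable:
            probable.append(i)
    return real, probable
```

Keeping two output classes lets a downstream verification step (or a human, if desired) focus only on the ambiguous cases, while confident detections are reported immediately.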