Intra prediction is one of the many coding-efficiency-oriented tools of H.264/AVC, but it requires high computational complexity. Many fast intra coding algorithms have been proposed to reduce the computational complexity of intra prediction, but most of them focus on the mode decision process itself. In this paper, we propose a fast algorithm in which new intra modes are substituted for some of the conventional intra modes, so that the number of candidate modes is reduced. The proposed modes, namely the weighted mean and median modes, can effectively represent the directional structure of a block. Simulation results show that the proposed method reduces the encoding time of the overall sequence by about 11% and that of the I-frames by about 28%, without any noticeable degradation of coding efficiency.
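The abstract does not give the exact definitions of the weighted mean and median modes, but the general idea of predicting a 4×4 block from its reconstructed neighbors can be sketched as follows. The function name, the uniform weights, and the use of only the top row and left column are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def mean_median_predictors(top, left):
    """Sketch of two non-directional 4x4 predictors in the spirit of the
    proposed weighted-mean and median modes, assuming only the
    reconstructed top row (4 pixels) and left column (4 pixels) are
    available. Weights and formulation are illustrative assumptions."""
    neighbors = np.concatenate([top, left]).astype(float)
    # Weighted-mean mode: each predicted pixel is a weighted average of
    # the neighboring pixels; uniform weights are shown here.
    weights = np.ones_like(neighbors) / neighbors.size
    mean_pred = np.full((4, 4), np.dot(weights, neighbors))
    # Median mode: each predicted pixel is the median neighbor value,
    # which is robust to isolated outlier edge pixels.
    median_pred = np.full((4, 4), np.median(neighbors))
    return mean_pred, median_pred
```

A mode-decision loop would compare these predictors against the surviving conventional modes by a rate-distortion cost, exactly as the standard encoder does for its directional modes.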
Intra prediction in H.264/AVC uses nine modes for 4×4 luma blocks and four modes for 16×16 luma and 8×8 chroma blocks. The intra prediction modes exploit the similarity between the current macroblock and previously encoded neighboring macroblocks, as well as the directionality of the image in the pixel domain. As a result, the intra coding efficiency of H.264/AVC is improved over conventional video codecs such as MPEG-2 and MPEG-4. However, the coding efficiency of intra-coded frames is still much lower than that of inter-coded slices, since the intra prediction modes use only a limited set of neighboring macroblocks, which makes it harder to obtain a well-matched reference block than with inter prediction. Therefore, for higher coding efficiency, it is important to obtain a well-matched prediction block in a given environment. In a typical video codec using the 4:2:0 YCbCr format, overall coding efficiency is dominated by the luma component rather than the chroma components. In this paper, we propose an additional intra luma prediction mode that uses collocated chroma pixels and weight values. The proposed method utilizes the collocated chroma macroblock as a reference image for more efficient intra luma prediction.
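The abstract does not specify how the collocated chroma pixels and weight values are combined, but one plausible sketch, under our own assumptions, is to upsample the 8×8 chroma blocks to luma resolution (in 4:2:0, each chroma sample is collocated with a 2×2 luma patch) and take a weighted combination. The function name, the pixel-repetition upsampling, and the weighting rule are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def luma_from_chroma_prediction(cb, cr, w_cb=0.5, w_cr=0.5, offset=0.0):
    """Sketch of predicting a 16x16 luma macroblock from the collocated
    8x8 Cb and Cr blocks of a 4:2:0 macroblock. Upsampling method,
    weights, and offset are illustrative assumptions."""
    # Nearest-neighbor 2x upsampling: each chroma sample is replicated
    # over the 2x2 luma pixels it is collocated with.
    cb_up = np.repeat(np.repeat(cb, 2, axis=0), 2, axis=1)
    cr_up = np.repeat(np.repeat(cr, 2, axis=0), 2, axis=1)
    # Weighted combination of the two upsampled chroma planes.
    return w_cb * cb_up + w_cr * cr_up + offset
```

In an encoder, the weight values would be chosen (and signaled or derived) so that the chroma-based predictor tracks the local luma structure; the resulting mode competes with the conventional luma modes in the usual mode decision.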
We present a method for building panoramic video for Video GIS. Video GIS (geographic information system) is a new application field of image mosaics, used in automobile navigation systems; a panoramic video composed of several images taken by adjacent cameras provides useful information to a first-time driver. A perspective transformation, estimated from appropriate corresponding point pairs between adjacent images, can construct the panoramic video without unwanted distortions. We use corner points as the corresponding features, and local peak detection of a corner strength measure derived from morphological structures is employed for fast and robust corner detection. The corner-strength criterion we propose guarantees robust corner detection in a variety of situations. For the perspective transformation, eight parameters are estimated from the perspective equations, and four pairs of matched points in the adjacent images, selected via pattern matching of the corner points, are used to construct the equations. In general, when two adjacent images are stitched together with the 8-parameter transform, unwanted discontinuities of intensity or color appear in their common area, so a bilinear blending technique is used to construct a seamless panorama. Experiments show that our method yields good results under various conditions.
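The 8-parameter estimation described above is the standard setup for a perspective (projective) transform: four point correspondences yield eight linear equations in the eight unknowns. The following is a minimal sketch of that linear system; the function name and interface are our own, not from the paper.

```python
import numpy as np

def homography_from_4_points(src, dst):
    """Estimate the 8-parameter perspective transform mapping four
    source points to four destination points (no three collinear).
    src, dst: arrays of shape (4, 2). Returns a 3x3 matrix H with
    H[2, 2] fixed to 1. Illustrative sketch of the standard setup."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # The perspective equations, with parameters (a..h):
        #   u = (a*x + b*y + c) / (g*x + h*y + 1)
        #   v = (d*x + e*y + f) / (g*x + h*y + 1)
        # Cross-multiplying gives two linear equations per point pair.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    params = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(params, 1.0).reshape(3, 3)
```

With the transform in hand, overlapping pixels in the common area would be combined by bilinear blending, i.e. weighting each image's contribution by its distance from the seam, which removes the intensity and color discontinuities the abstract mentions.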