AVC/H.264 supports the use of multiple reference frames (e.g., five frames) for motion estimation (ME), which imposes a huge computational burden on the ME stage. We propose an adaptive search range adjustment scheme that reduces the computational complexity of ME by shrinking the search range of each reference frame, from the (t-1)'th frame to the (t-5)'th frame, for each macroblock. Based on the statistical observation that the 16×16 mode is selected far more often than the other block partition modes, the proposed method reduces the search range of the remaining ME process in a given reference frame according to the motion vector (MV) position found by the 16×16 block ME. For the (t-1)'th frame, the MV position of the 8×8 block ME, in addition to that of the 16×16 block ME, is also used to reduce the search range for the sub-partition modes of the 8×8 block. Experimental results show that the proposed method reduces the total encoding time by about 50% on CIF/SIF and about 65% on full-HD test sequences, without noticeable visual degradation, compared to the full search method of the AVC/H.264 encoder.
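The core idea can be sketched in a few lines. The following is a minimal illustration, not the paper's exact algorithm: after a full-range search for the 16×16 partition, the search windows for the smaller partitions are re-centered on the 16×16 MV and shrunk. The SAD cost, the window sizes, and the restriction to 8×8 sub-partitions are illustrative assumptions.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-size blocks (lists of rows)."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b) for a, b in zip(ra, rb))

def full_search(cur, ref, cx, cy, bw, bh, center, search_range):
    """Exhaustive ME of the bw x bh block at (cx, cy) within +/-search_range of center."""
    best_mv, best_cost = (0, 0), float("inf")
    cur_blk = [row[cx:cx + bw] for row in cur[cy:cy + bh]]
    for dy in range(center[1] - search_range, center[1] + search_range + 1):
        for dx in range(center[0] - search_range, center[0] + search_range + 1):
            x, y = cx + dx, cy + dy
            if x < 0 or y < 0 or y + bh > len(ref) or x + bw > len(ref[0]):
                continue  # candidate block falls outside the reference frame
            cost = sad(cur_blk, [row[x:x + bw] for row in ref[y:y + bh]])
            if cost < best_cost:
                best_mv, best_cost = (dx, dy), cost
    return best_mv, best_cost

def adaptive_me(cur, ref, cx, cy, full_range=16, reduced_range=4):
    """ME for one 16x16 MB: full search for the 16x16 mode, then a reduced
    window centred on its MV for the smaller partitions (here just the four 8x8s)."""
    mv16, _ = full_search(cur, ref, cx, cy, 16, 16, (0, 0), full_range)
    sub_mvs = []
    for oy in (0, 8):
        for ox in (0, 8):
            mv, _ = full_search(cur, ref, cx + ox, cy + oy, 8, 8, mv16, reduced_range)
            sub_mvs.append(mv)
    return mv16, sub_mvs
```

With `reduced_range=4` instead of 16, each 8×8 search visits 81 positions rather than 1089, which is where the complexity saving comes from.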
MPEG-4 AVC/H.264 provides the best compression efficiency among current standards, but at the cost of the high computational complexity of selecting the best mode during the mode decision process. To reduce this complexity, we propose a fast mode decision algorithm that exploits the spatiotemporal correlation of macroblock (MB) modes and rate-distortion (RD) costs across the previous and current frames. The proposed method determines the candidate modes for the current MB from the modes of the four spatially neighboring MBs and the temporally collocated MB. It also incorporates the RD costs of the four reference MBs and the current MB. Experimental results show that an AVC/H.264 encoder using the proposed method achieves about a 73% speedup (up to 86.5%) over the AVC/H.264 JM 11.0 reference encoder, while incurring only a 0.25% increase in total encoding bits and a loss of only 0.09 dB in peak SNR.
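The candidate-mode selection step can be sketched as follows. This is a hedged illustration under assumed rules, not the paper's exact decision logic: the candidate set is the union of the neighbors' modes and the collocated MB's mode, with the cheap SKIP and 16×16 modes always retained as a fallback; the mode list and RD costs are hypothetical.

```python
# Canonical mode order of an (assumed) H.264-style encoder.
ALL_MODES = ["SKIP", "16x16", "16x8", "8x16", "8x8", "I4x4", "I16x16"]

def candidate_modes(spatial_neighbor_modes, collocated_mode):
    """Reduced candidate set for the current MB: modes of the spatial
    neighbours and the collocated MB, plus the always-tested cheap modes."""
    cands = set(spatial_neighbor_modes)
    cands.add(collocated_mode)
    cands.update({"SKIP", "16x16"})
    # Keep the encoder's canonical mode order for deterministic evaluation.
    return [m for m in ALL_MODES if m in cands]

def fast_mode_decision(rd_cost, spatial_neighbor_modes, collocated_mode):
    """Evaluate RD cost only over the candidates and pick the cheapest mode."""
    return min(candidate_modes(spatial_neighbor_modes, collocated_mode), key=rd_cost)
```

The speedup comes from `rd_cost` (the expensive RD evaluation) being called only for the few candidate modes instead of all seven.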
Stepper motors, piezoelectric actuators, liquid lenses, and voice coil motors (VCMs) have been considered good candidates for actuators in auto-focusing compact camera modules (CCMs), and VCMs currently dominate the auto-focusing CCM market. However, VCMs have limitations for thin, low-power CCMs. Ionic polymer-metal composites (IPMCs) are therefore among the most promising candidates for auto-focusing CCMs, owing to their well-known low power consumption and large displacement. For the practical application of IPMCs to auto-focusing CCMs, a fast bending response (20 μm/20 ms) and a large blocking force (800 mgf) are required. Here, we present a method for increasing the bending response and displacement of IPMCs by anisotropic plasma treatment. Furthermore, we demonstrate a prototype CCM actuated by an IPMC and its remarkably low power consumption.
In MPEG-4, 3D mesh coding (3DMC) achieves a 40:1 to 50:1 compression ratio over 3-D meshes (in the VRML IndexedFaceSet representation) without noticeable visual degradation. This substantial gain does not come for free: 3DMC changes the vertex and face permutation order of the original 3-D mesh model. This change can cause serious problems for animation, editing operations, and special effects, where the original permutation order is critical not only to the mesh representation but also to the related tools. To fix this problem, the vertex and face permutation order must be transmitted additionally, which increases the bitstream size. In this paper, we propose a novel vertex and face permutation order compression algorithm that addresses the permutation order change introduced by 3DMC encoding with a minimal increase of side information. The proposed coding method is based on an adaptive probability model, which allocates a one-bit-shorter codeword to each vertex and face permutation order in every distinguishable unit as encoding proceeds. In addition to the adaptive probability model, we further increase the coding efficiency by representing and encoding each vertex and face permutation order per connected component (CC). Simulation results demonstrate that the proposed algorithm encodes the vertex and face permutation order losslessly while saving up to 12% in bits compared with a logarithmic representation based on a fixed probability model.
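The adaptive idea can be illustrated with a simple bit count. This is only a sketch of the principle, not the paper's coder (which works per connected component and would use an adaptive arithmetic model): once an index has been encoded, it is removed from the pool of remaining indices, so each subsequent index is drawn from a shrinking alphabet and needs ceil(log2(remaining)) bits instead of a fixed ceil(log2(n)) bits.

```python
import math

def fixed_bits(n):
    """Bits to code a permutation of n items with a fixed-size alphabet."""
    return n * math.ceil(math.log2(n))

def adaptive_bits(n):
    """Bits when the alphabet shrinks by one after every coded index
    (the final index is fully determined and costs zero bits)."""
    return sum(math.ceil(math.log2(r)) for r in range(2, n + 1))

def encode_permutation(perm):
    """Encode each index by its rank among the not-yet-seen indices."""
    remaining = sorted(perm)
    ranks = []
    for v in perm:
        ranks.append(remaining.index(v))  # rank in the shrinking alphabet
        remaining.remove(v)               # alphabet loses one symbol
    return ranks
```

For example, a 4-element permutation costs 1 + 2 + 2 + 0 = 5 bits adaptively versus 4 × 2 = 8 bits with a fixed alphabet, and the relative saving persists as n grows.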