Lossless coding using predictors and VLCs optimized for each image
23 June 2003
Authors: Ichiro Matsuda, Noriyuki Shirai, Susumu Itoh
Proceedings Volume 5150, Visual Communications and Image Processing 2003; (2003) https://doi.org/10.1117/12.502843
Event: Visual Communications and Image Processing 2003, Lugano, Switzerland
Abstract
This paper proposes an efficient lossless coding scheme for still images. The scheme uses an adaptive prediction technique in which a set of linear predictors is designed for a given image and an appropriate predictor is selected from the set block-by-block. The resulting prediction errors are encoded using context-adaptive variable-length codes (VLCs). Context modeling, i.e. adaptive selection of VLCs, is carried out pel-by-pel, and the VLC assigned to each context is designed from a probability distribution model of the prediction errors. To improve coding efficiency, a generalized Gaussian function is used as the model for each context. Moreover, not only the predictors but also the parameters of the probability distribution models are iteratively optimized for each image so that the coding rate of the prediction errors is minimized. Experimental results show that the proposed scheme attains coding performance comparable to the state-of-the-art TMW scheme with much lower decoding complexity.
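For reference, a zero-mean generalized Gaussian density of the kind mentioned above can be written, in one common parameterization (given here as a reader's aid; the paper may use a different but equivalent form), as

p(e; \sigma, \theta) = \frac{\theta\,\eta(\theta)}{2\,\sigma\,\Gamma(1/\theta)} \exp\!\left( -\left( \eta(\theta)\,\frac{|e|}{\sigma} \right)^{\theta} \right), \qquad \eta(\theta) = \sqrt{\frac{\Gamma(3/\theta)}{\Gamma(1/\theta)}},

where \sigma^2 is the variance and \theta is a shape parameter: \theta = 2 yields the Gaussian and \theta = 1 the Laplacian distribution. Allowing \theta (and \sigma) to be tuned per context lets the VLC design track how heavy-tailed the prediction errors are in each context.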
© 2003 Society of Photo-Optical Instrumentation Engineers (SPIE).
Ichiro Matsuda, Noriyuki Shirai, and Susumu Itoh, "Lossless coding using predictors and VLCs optimized for each image", Proc. SPIE 5150, Visual Communications and Image Processing 2003 (23 June 2003); https://doi.org/10.1117/12.502843
Proceedings paper, 8 pages.

