A lossless image compression algorithm based on a prediction and classification scheme is presented. The algorithm decomposes an image into four subimages by subsampling pixels at even and odd (both row and column) locations. Because the four subimages are strongly correlated with one another, one of them is used as a reference to predict the other three, and the differences between the predicted subimages and the original subimages are encoded. Even though these differences are decorrelated and tend to be random, a relatively large correlation remains between the estimated differences and the subimage used in prediction. For example, the estimated differences tend to be large in a highly detailed region, where pixel values change rapidly, whereas they are small in a smooth region, where pixel values change slowly. This redundancy is exploited by dividing the estimated differences into subsets based on the local slope change in the subimage used in prediction. The performance of the proposed algorithm with two different predictors, linear interpolation and third-order polynomial interpolation, is compared with that of the hierarchical interpolation (HINT) scheme and Fourier transform interpolation by measuring the first-order entropies of the estimated differences. With third-order polynomial interpolation and division into two subsets, an average entropy of 3.1 bits/pixel is achieved for the three predicted difference subimages of 12 bits/pixel x-ray computed tomography images. This is about 0.86 bits/pixel lower than the first-order entropy obtained with HINT for the same three subimages.
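The pipeline described above can be sketched as follows, here with the simpler of the two predictors (linear interpolation) and a two-subset classification. This is an illustrative sketch, not the paper's exact method: the slope threshold, the activity measure, and the function names are assumptions, and wrap-around at image borders (via `np.roll`) is a simplification.

```python
import numpy as np

def first_order_entropy(values):
    """First-order entropy (bits/pixel) of an integer-valued array."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def compress_stats(image, threshold=8):
    """Decompose an even-sized image into four subimages, predict three of
    them from the even/even reference by linear interpolation, and split the
    prediction differences into two subsets by local slope in the reference.
    `threshold` is an assumed activity cutoff, not a value from the paper."""
    s00 = image[0::2, 0::2].astype(np.int32)  # reference subimage
    s01 = image[0::2, 1::2].astype(np.int32)
    s10 = image[1::2, 0::2].astype(np.int32)
    s11 = image[1::2, 1::2].astype(np.int32)

    # Neighbouring reference pixels (np.roll wraps at the borders,
    # a simplification for this sketch).
    right = np.roll(s00, -1, axis=1)
    down = np.roll(s00, -1, axis=0)
    diag = np.roll(right, -1, axis=0)

    # Linear-interpolation predictions and the resulting differences.
    d01 = s01 - (s00 + right) // 2
    d10 = s10 - (s00 + down) // 2
    d11 = s11 - (s00 + right + down + diag) // 4

    # Classify by slope change in the reference subimage: large slope
    # -> "high detail" subset, small slope -> "low detail" subset.
    slope = np.abs(right - s00) + np.abs(down - s00)
    high = np.concatenate([(slope > threshold).ravel()] * 3)
    diffs = np.concatenate([d01.ravel(), d10.ravel(), d11.ravel()])
    return {
        "entropy_all": first_order_entropy(diffs),
        "entropy_high": first_order_entropy(diffs[high]) if high.any() else 0.0,
        "entropy_low": first_order_entropy(diffs[~high]) if (~high).any() else 0.0,
    }
```

Coding the two subsets separately pays off because, as noted above, the difference statistics differ sharply between detailed and smooth regions, so each subset has a lower first-order entropy than their union.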