First-generation image compression methods, based on block-wise DCT or wavelet transforms, compress all image blocks at a uniform compression ratio; consequently, regions of special interest are degraded along with the rest of the image. Second-generation methods apply object-based compression, in which each object is first segmented and then encoded separately. Content-based compression improves further on object-based compression by applying image-understanding techniques: each object is first recognized or classified, and different objects are then compressed at different rates according to their priorities. Regions with higher priority, such as objects of interest, receive more encoding bits than less important regions, such as the background. The major difference between a content-based compression algorithm and conventional block-based or object-based algorithms is that content-based compression replaces the quantizer with a more sophisticated classifier.

In this paper we describe a technique in which the image is first segmented into regions by texture and color. These regions are then classified and merged into objects by a classifier that uses their color, texture, and shape features. Each object is transformed by either the DCT or a wavelet transform, and the resulting coefficients are encoded to an accuracy that minimizes recognition error while satisfying the bit-rate requirements. We employ the Chernoff bound to compute the cost function of the recognition error. Compared with conventional image compression methods, our results show that content-based compression achieves more efficient coding by suppressing the background while leaving the objects of interest virtually intact.
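The priority-dependent bit allocation described above can be sketched in a few lines: each region's DCT coefficients are quantized with a step size that shrinks as the region's priority grows, so objects of interest retain more coefficients than the background. This is an illustrative sketch only; the function names (`dct2`, `encode_region`), the linear priority-to-step mapping, and the base step size are our assumptions, not the paper's actual quantizer design.

```python
import math

def dct2(block):
    """Naive orthonormal 2-D DCT-II of an N x N block (for illustration only)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            cu = math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n)
            cv = math.sqrt(1.0 / n) if v == 0 else math.sqrt(2.0 / n)
            out[u][v] = cu * cv * s
    return out

def quantize(coeffs, step):
    """Uniform scalar quantization; a larger step zeroes more detail."""
    return [[round(c / step) * step for c in row] for row in coeffs]

def encode_region(block, priority, base_step=16.0):
    """Content-based allocation: higher-priority regions get a finer
    quantizer (more bits); the background gets a coarser one.
    The mapping step = base_step / priority is an assumed example."""
    return quantize(dct2(block), base_step / priority)
```

For example, encoding the same 8x8 block with `priority=8` (object of interest) preserves at least as many nonzero coefficients as encoding it with `priority=1` (background), which is the behavior the scheme relies on.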