Any effort to develop efficient schemes for image representation must begin by pondering the nature of image structure and image information. The fundamental insight that makes compact coding possible is that the statistical complexity of images does not correspond to their resolution (number of resolvable states) if they contain nonrandom structure, coherence, or local auto-correlation. These are respects in which real images differ from random noise: they are optical projections of 3-D objects whose physical constitution and material unity ensure locally homogeneous image structure, whether such local correlations are as simple as shared luminance values or as subtle as a textural signature captured by some higher-order statistic. Except in the case of synthetic white noise, it is not true that each pixel in an image is statistically independent of its neighbors and of every other pixel; yet that is the default assumption in the standard image representations employed in video transmission channels and in the data structures of storage devices.

This statistical fact, that the entropy of the channel vastly exceeds the entropy of the signal, has long been recognized, but it has proven difficult to reduce channel bandwidth without loss of resolution. In practical terms, the consequence is that typical video data rates (8 bits for each of several hundred thousand pixels in an image mosaic, yielding information bandwidths in the tens of millions of bits per second) are far more costly informationally than they need to be; moreover, no image structure more complex than a single pixel at a time is explicitly extracted or encoded.
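The gap between channel entropy and signal entropy described above can be made concrete with a small numerical sketch. The snippet below (an illustration of the general idea, not part of the original text; the synthetic images and the use of first-order difference entropy as a crude proxy for signal redundancy are assumptions of this sketch) compares a locally coherent image against white noise at the same 8-bit depth: the channel always costs 8 bits per pixel, but the entropy of neighboring-pixel differences in the coherent image is far lower, reflecting its local auto-correlation.

```python
import numpy as np

def entropy_bits(values):
    """First-order (histogram) entropy of a sample array, in bits per sample."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)

# A locally coherent "image": a smooth diagonal ramp plus mild sensor-like
# noise, quantized to 8 bits (an assumed stand-in for a real photograph).
x = np.linspace(0.0, 255.0, 256)
ramp = 0.5 * (x[None, :] + x[:, None])
smooth = np.clip(ramp + rng.normal(0.0, 2.0, (256, 256)), 0, 255).astype(np.uint8)

# Synthetic white noise at the same bit depth: every pixel independent.
noise = rng.integers(0, 256, (256, 256), dtype=np.uint8)

# The channel costs 8 bits/pixel regardless of content. A crude proxy for the
# signal's redundancy: the entropy of horizontal neighbor differences.
smooth_diff = np.diff(smooth.astype(np.int16), axis=1)
noise_diff = np.diff(noise.astype(np.int16), axis=1)

print("channel cost:            8.00 bits/pixel")
print(f"smooth diff entropy:  {entropy_bits(smooth_diff):7.2f} bits/pixel")  # well below 8
print(f"noise  diff entropy:  {entropy_bits(noise_diff):7.2f} bits/pixel")  # about 8 or more

# Local auto-correlation: adjacent pixels in the coherent image are highly
# correlated, whereas in white noise they are essentially uncorrelated.
r_smooth = np.corrcoef(smooth[:, :-1].ravel(), smooth[:, 1:].ravel())[0, 1]
r_noise = np.corrcoef(noise[:, :-1].ravel(), noise[:, 1:].ravel())[0, 1]
print(f"neighbor correlation (smooth): {r_smooth:.3f}")
print(f"neighbor correlation (noise):  {r_noise:.3f}")
```

A difference-based entropy is only a first-order bound, but it already shows why a representation that assumes independent pixels wastes most of its bandwidth on structure the image does not contain.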