Remote sensing images acquired in various spectral bands are used to estimate geophysical parameters or to detect the presence or extent of geophysical phenomena. In general, the raw image acquired by the sensor is processed with operations such as filtering, compression, and enhancement in order to improve its utility for a particular application. In performing these operations, the analyst attempts to maximize the information content in the image with respect to the end objective. For a specific application, the information content of a remotely sensed image depends strongly on its gray-scale resolution. Intuitively, as the gray-scale resolution is degraded, the information content is expected to decrease; however, the exact relationship between the two is not well understood. For example, while the digital number (DN) of a pixel may change when the number of gray levels is reduced, the overall image classification accuracy (a measure of information content) may not show a corresponding reduction. Furthermore, the degradation in information content has been shown to depend on the spatial resolution as well. Our simulation studies reveal that the information content does indeed drop as the gray-scale resolution degrades, and similar results are observed with real images. We have developed a simple mathematical model relating image information content to gray-scale resolution, from which the optimal number of gray levels needed to interpret an image for a particular application may be deduced.
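The kind of simulation described above can be illustrated with a minimal sketch: requantize an 8-bit image to progressively fewer gray levels and track an information measure at each step. The abstract uses classification accuracy as its measure of information content; here, as an assumption for illustration only, the Shannon entropy of the gray-level histogram is used as a simpler proxy, and the test image is synthetic (a gradient plus noise) rather than a real remotely sensed scene.

```python
import numpy as np

def requantize(img, levels):
    """Uniformly quantize an 8-bit image down to `levels` gray levels."""
    # Map DN values in [0, 255] onto the integer range [0, levels - 1].
    q = np.floor(img.astype(np.float64) * levels / 256.0)
    return q.astype(np.uint8)

def entropy(img):
    """Shannon entropy (bits/pixel) of the image's gray-level histogram."""
    counts = np.bincount(img.ravel())
    p = counts[counts > 0] / img.size
    return float(-(p * np.log2(p)).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic 8-bit "scene": a smooth gradient plus sensor-like noise.
    x = np.linspace(0, 255, 256)
    img = np.clip(x[None, :] + rng.normal(0, 20, (256, 256)),
                  0, 255).astype(np.uint8)
    # Entropy falls monotonically as gray-scale resolution is degraded.
    for levels in (256, 64, 16, 4):
        h = entropy(requantize(img, levels))
        print(f"{levels:4d} gray levels: entropy = {h:.2f} bits/pixel")
```

An n-level image can carry at most log2(n) bits/pixel, so the measured entropy is capped by the quantization itself; whether a task-level measure such as classification accuracy degrades at the same rate is exactly the question the study addresses.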