9.1 Image Processing in the Imaging Chain
The output of the digital sensor is a "raw" digital image that consists of an array of digital count values with each value representing the brightness, or gray level, of a pixel in the image. Image processing is generally employed in the imaging chain to improve the efficacy of the image data (Fig. 9.1). Although image processing is a very broad field that includes compression, feature detection, and classification,1,2 we will focus our discussion here on the common processing methods used to enhance the visual quality of the image. Specifically, we will first look at contrast enhancement methods, and then at spatial filtering methods that sharpen edges and remove much of the image blur. (Detector calibration is usually the first step of the image enhancement chain, but this was discussed earlier as part of the sensor modeling.) For simplicity, we will assume that the images have an eight-bit dynamic range; i.e., there are 2^8 = 256 possible gray levels, so the gray levels in the image will be in the range 0-255, with zero being black and 255 being white. Color images have three arrays of numbers typically representing the red, green, and blue images that are combined to give the full spectrum of colors. We will focus on processing single-band images, i.e., black and white images.
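The eight-bit conventions above can be illustrated with a minimal sketch (using NumPy, which is an assumption of this example, not part of the text): a single-band image is simply a two-dimensional array of counts in the range 0-255, and a color image stacks three such bands.

```python
import numpy as np

# A "raw" single-band image: an array of digital count values, one per pixel.
# With an eight-bit dynamic range there are 2**8 = 256 possible gray levels,
# so counts fall in 0-255, with 0 being black and 255 being white.
img = np.array([[0, 64],
                [192, 255]], dtype=np.uint8)

# A color image carries three such arrays, typically red, green, and blue;
# here we stack the same band three times purely to show the layout.
rgb = np.stack([img, img, img], axis=-1)  # shape: (rows, cols, 3)
```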
9.2 Contrast Enhancements
Contrast enhancements improve the perceptibility of objects in the scene by enhancing the brightness difference between objects and their backgrounds. Contrast enhancements are typically performed as a contrast stretch followed by a tonal enhancement, although these could both be performed in one step. A contrast stretch improves the brightness differences uniformly across the dynamic range of the image, whereas tonal enhancements improve the brightness differences in the shadow (dark), midtone (grays), or highlight (bright) regions at the expense of the brightness differences in the other regions.
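The two-step sequence described above can be sketched as follows. This is a minimal illustration, not the text's prescribed implementation: it assumes a simple min-max linear stretch for the contrast stretch and a gamma (power-law) curve for the tonal enhancement, with gamma < 1 favoring shadow detail and gamma > 1 favoring highlight detail.

```python
import numpy as np

def contrast_stretch(img):
    """Linearly map the image's occupied gray levels onto the full 0-255 range.

    This spreads brightness differences uniformly across the dynamic range.
    """
    lo, hi = float(img.min()), float(img.max())
    stretched = (img.astype(np.float64) - lo) / (hi - lo) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

def tonal_enhance(img, gamma=0.5):
    """Apply a power-law tone curve to an 8-bit image.

    gamma < 1 expands brightness differences in the shadows at the expense
    of the highlights; gamma > 1 does the opposite.
    """
    normalized = img.astype(np.float64) / 255.0
    return np.clip(255.0 * normalized ** gamma, 0, 255).astype(np.uint8)

# Typical usage: stretch first, then adjust the tones.
raw = np.array([[50, 100], [150, 200]], dtype=np.uint8)
enhanced = tonal_enhance(contrast_stretch(raw), gamma=0.5)
```

Note that because the gamma curve is monotonic, the two steps could equally be folded into a single lookup table applied in one pass, as the text observes.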