In Parts 1 and 2 of this three-part series, we showed that processing compressed imagery can yield computational efficiency, since fewer data must be processed. In Part 1, we showed that a compressive operation can simulate an image-domain operation, in the sense that the output of the compressive operation, when decompressed, equals or approximates the output of the corresponding image operation. We also presented a unifying theory that supports high-level derivation of compressive operations for image operations such as pointwise, global-reduce (e.g., image summation or maximum), and image-template (e.g., linear convolution) operations, together with further discussion and analysis of the block truncation coding (BTC) and visual pattern image coding (VPIC) compressive transforms. In Part 2, we analyzed high-level formulations of the vector quantization (VQ) and JPEG compression transforms, and further illustrated the utility of our high-level derivational methods by deriving and demonstrating several pixel-level operations over VPIC- and BTC-compressed imagery. In this paper, we extend our previous derivations to image processing operations such as edge detection and smoothing, as well as higher-level operations such as target classification and connected component labeling. Our analyses emphasize computational efficiency, as well as the effects of information loss and computational error. All algorithms are expressed in terms of image algebra, a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Since image algebra has been implemented on numerous sequential and parallel computers, our algorithms are feasible and widely portable.
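To make the central idea concrete, the following is a minimal sketch (not taken from the paper; function names and block size are illustrative assumptions) of a compressive global-reduce operation over BTC-compressed imagery. In standard BTC, each block is stored as a mean, a standard deviation, and a bitmap; because the two reconstruction levels preserve the block mean, the global image sum can be computed directly from the stored means, without decompression, using one term per block rather than one per pixel.

```python
import numpy as np

def btc_encode(img, bs=4):
    """Encode an image with block truncation coding.

    Each bs-by-bs block is reduced to (mean, std, bitmap), where the
    bitmap marks pixels above the block mean.  Names here are
    illustrative, not the paper's notation.
    """
    h, w = img.shape
    blocks = []
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            blk = img[i:i + bs, j:j + bs].astype(float)
            blocks.append((blk.mean(), blk.std(), blk > blk.mean()))
    return blocks

def btc_decode(blocks, shape, bs=4):
    """Reconstruct the image from BTC data with the two standard
    moment-preserving levels a and b."""
    out = np.zeros(shape)
    idx = 0
    for i in range(0, shape[0], bs):
        for j in range(0, shape[1], bs):
            mu, sigma, bitmap = blocks[idx]
            idx += 1
            m, q = bitmap.size, int(bitmap.sum())
            if 0 < q < m:
                a = mu - sigma * np.sqrt(q / (m - q))
                b = mu + sigma * np.sqrt((m - q) / q)
            else:
                a = b = mu  # uniform block: nothing above the mean
            out[i:i + bs, j:j + bs] = np.where(bitmap, b, a)
    return out

def compressive_sum(blocks, bs=4):
    """Global image sum computed in the compressed domain.

    Per block, q*b + (m-q)*a simplifies algebraically to m*mu, so the
    sum needs only the stored block means -- one multiply-add per
    block instead of one per pixel.
    """
    return sum(bs * bs * mu for mu, _, _ in blocks)
```

Here the compressive sum equals the sum of the decompressed image (to floating-point precision), illustrating the Part 1 criterion that a compressive operation's output should equal or approximate the corresponding image-domain result, while touching roughly 1/16 as many operands at this block size.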