SIPHER was first revealed in a US Air Force Research Laboratory Information Directorate (AFRL/RIEC)
project concerned with polarimetric and SAR processing techniques. It is a means of making objects in a digital image
vary in intensity (amplitude) with respect to other objects or backgrounds, in an unusual manner that promotes
cognitive perception of an object or target. We describe this phenomenon as objects being in or out of spatial intensity
phase with one another, somewhat analogous to how the amplitudes of different signals differ at any instant because of
their relative phases.
Simple surface reflectivity and a single, static illumination source provide no special means to distinguish
objects from backgrounds, other than their reflectivity differences. However, if different surfaces are illuminated from
different source positions or with different amplitudes, like from a moving spotlight, different pixels with the same
reflectivity may have different amplitudes at different instants during the source's dynamic behavior. The problem is
that we cannot necessarily control the source dynamics, or collect images over sufficient time, to benefit from them.
SIPHER simulates source dynamics in a single, static image. It creates apparent reflectivity changes in an
image taken at a single instant, as if the illumination source's intensity and position were changing, as a function of
the algorithm's threshold settings. This produces a series of processed images in which object and background pixel
amplitudes are out of phase with one another, owing to their orientation and surface characteristics (flat, curved, etc.), and
become more perceptible. Cognitive perception is enhanced by creating a video sequence of the processed image series.
This produces an apparent motion effect in the object relative to its surroundings, or renders an apparent
three-dimensional effect in which the object appears to "jump out" from its surroundings.
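As a minimal sketch of the idea, a threshold sweep over a single static image can be rendered as a frame series. The function name, gain factors, and threshold schedule below are illustrative assumptions, not the published SIPHER algorithm: each frame simply brightens pixels above the current threshold and dims those below it, so that pixel amplitudes move "in and out of phase" as the threshold sweeps.

```python
import numpy as np

def threshold_sweep(image, thresholds):
    """Generate a series of processed frames from one static image.

    For each threshold t, pixels at or above t are brightened (clipped
    at 1.0) and pixels below t are dimmed. As t sweeps, a pixel's
    amplitude relative to its neighbors oscillates, a stand-in for the
    spatial-intensity-phase effect described in the text.
    """
    frames = []
    for t in thresholds:
        frame = np.where(image >= t,
                         np.minimum(image * 1.5, 1.0),  # brighten
                         image * 0.5)                   # dim
        frames.append(frame)
    return frames

# Toy 4x4 "image": a bright object (0.8) on a dim background (0.2).
img = np.array([[0.2, 0.2, 0.2, 0.2],
                [0.2, 0.8, 0.8, 0.2],
                [0.2, 0.8, 0.8, 0.2],
                [0.2, 0.2, 0.2, 0.2]])

frames = threshold_sweep(img, thresholds=np.linspace(0.1, 0.9, 5))
```

Played back as a video, such a frame series is what produces the apparent-motion or "jump out" effect relative to the static background.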
We first define this spatial intensity phase quantity mathematically, then compare it to conventional signal
phase relationships, and finally apply it to sample images to demonstrate its behavior. We also discuss anticipated
enhancement and normalization techniques that may further improve the method in the future.