This paper presents a method for perceptual video compression that exploits the phenomenon of backward temporal
masking. We present an overview of visual temporal masking and discuss models that identify portions of a video
sequence masked by this phenomenon of the human visual system. A quantization control model based
on the psychophysical model of backward visual temporal masking was developed. We conducted two types of
subjective evaluations and demonstrated that the proposed method achieves up to 10% bitrate savings on top of a
state-of-the-art encoder while producing visually identical video. The proposed methods were evaluated using an HEVC encoder.
Evidence is provided for independent motion pathways that can serve to discriminate the motion of objects from the optic flow produced by the perceiver's egomotion, the latter based on detecting motion energy. Motion energy models are founded on the idea that low-level motion perception entails the detection of spatiotemporal changes in raw luminance (i.e., oriented energy), irrespective of the boundaries that segregate objects from their background and/or delineate the parts of objects. In the current study, it was shown that the distinction between motion based on detecting an object's edges and motion based on detecting motion energy corresponds to Wertheimer's distinction between beta motion and objectless phi motion. Evidence came from a stimulus for which luminance increments spread in one direction, but in a way that created stimulus information specifying successive edge motions in the opposite direction. Objectless phi motion is perceived only for brief frame durations (high speeds). Beta motion is perceived for relatively long frame durations (slower speeds) when luminance contrast decreases at one edge and simultaneously increases at another. These results, which cannot be accounted for by attentive feature tracking, indicate that there are independent mechanisms for detecting object motion and detecting objectless motion energy.