Visual difference predictors accept two images as input, perform some processing, and produce a single image as output. The output image represents a map of the visibility of differences between the two input images. In practical applications, input images arrive at whatever resolution is convenient for the application, and viewing distances are set as appropriate for the task at hand. To match retinal sampling rates, the images are typically filtered and down-sampled. Given that the typically employed optical point spread functions do not completely remove high-frequency information, we ask whether, and to what extent, high-frequency leakage leads to aliasing. In this paper we explore the amount of aliasing possible in our implementation of the Sarnoff Visual Discrimination Model, and describe a modification that uses a sampling grid similar to the Poisson disk distribution. We then compare the results of this sampling to those of the unmodified model.
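As an illustration of the kind of sampling grid described above, the sketch below generates a Poisson-disk-like point set by naive dart throwing: candidate points are accepted only if they lie at least a minimum distance from all previously accepted points, which breaks up the regular grid structure that gives rise to coherent aliasing artifacts. This is not the paper's implementation; the function name, parameters, and the dart-throwing strategy (rather than a faster method such as Bridson's algorithm) are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): dart-throwing
# generation of a Poisson-disk-like sampling grid.
import numpy as np

def poisson_disk_samples(width, height, min_dist, max_attempts=30_000, rng=None):
    """Return an (N, 2) array of sample positions whose pairwise
    separation is at least min_dist, generated by dart throwing."""
    rng = np.random.default_rng() if rng is None else rng
    points = []
    for _ in range(max_attempts):
        # Draw a uniformly random candidate position in the image plane.
        candidate = rng.uniform((0.0, 0.0), (float(width), float(height)))
        # Accept it only if it keeps the minimum separation from all
        # previously accepted points (trivially true for the first point).
        if all(np.hypot(*(candidate - p)) >= min_dist for p in points):
            points.append(candidate)
    return np.array(points)

if __name__ == "__main__":
    samples = poisson_disk_samples(256, 256, min_dist=4.0)
    print(f"accepted {len(samples)} sample positions")
```

Image values would then be taken at (or interpolated to) these irregular positions instead of a uniform down-sampling lattice, trading the structured aliasing of a regular grid for broadband noise.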