In traditional histopathology images, and in emerging ex vivo microscopy techniques that generate large (gigapixel) image data, new strategies are needed to efficiently search images and identify salient areas for human review or computer-aided diagnosis, supporting rapid and efficient review workflows. One strategy is to learn from pathologists and develop model observers that can predict the most significant areas of an image for human review or for the application of computationally expensive diagnostic algorithms. To gain further understanding of pathologists' perception and cognition, we developed a custom web-based multiresolution viewer based on OpenSeadragon that records the view coordinates, zoom level, per-coordinate dwell time, and scan path of users viewing images, under the assumption that the view coordinates represent the real-time visual area. We conducted experiments on two types of data with multiple reviewers: 1) traditional histopathology images of radical prostatectomy specimens, and 2) whole-surface images of prostate margins collected on intact organs using structured illumination microscopy. Overall error rate, viewing patterns, saliency maps, and the dwell time and zoom level on fixation clusters were analyzed for normal tissue and for cancer loci/positive margins. The results of these pilot experiments demonstrate that the saliency maps correspond well with known areas of tumor in histopathology images and with residual cancer in tumor-margin images. These data will be used to predict saliency maps on new images from low-level image features and to test their accuracy against expert reviewers. This tool shows promise for automated image dimensionality reduction and diagnosis of histopathology images.
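As a minimal sketch of the kind of analysis described above, the following Python snippet aggregates recorded viewport samples into a coarse dwell-time saliency map. The log format, field names, zoom weighting, and grid size here are illustrative assumptions, not the viewer's actual schema.

```python
from collections import defaultdict

# Hypothetical view-log entries: (x, y, zoom, timestamp_seconds) samples,
# with (x, y) the normalized viewport center. The layout is an assumption
# for illustration, not the recorder's real output format.
view_log = [
    (0.20, 0.30, 2.0, 0.0),
    (0.21, 0.31, 2.0, 1.5),   # small move: same region, dwell accumulates
    (0.70, 0.60, 8.0, 2.0),   # jump to a new region, inspected at high zoom
    (0.70, 0.61, 8.0, 5.0),
]

GRID = 10  # discretize normalized image coordinates into a GRID x GRID map


def saliency_map(log, grid=GRID):
    """Accumulate dwell time (weighted by zoom) into a coarse grid.

    The interval between consecutive samples is attributed to the grid
    cell the viewer was centered on; higher zoom implies closer
    inspection, so dwell is weighted by the zoom level.
    """
    dwell = defaultdict(float)
    for (x, y, zoom, t0), (_, _, _, t1) in zip(log, log[1:]):
        cell = (min(int(x * grid), grid - 1), min(int(y * grid), grid - 1))
        dwell[cell] += (t1 - t0) * zoom
    return dict(dwell)


print(saliency_map(view_log))  # → {(2, 3): 4.0, (7, 6): 24.0}
```

In this toy log, the briefly viewed low-zoom region scores far lower than the region examined longer at high zoom, which is the intuition behind using dwell time and zoom level on fixation clusters as saliency signals.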