Humans use their senses, particularly vision, to interrogate the environment in search of information pertinent to the performance of a task. We say that the user has `visual goals', and we associate `visual acts' with these goals: visual acts are the patterns of `looking' displayed in acquiring the information. In this paper we present a model of visual acts based on known features of the human visual perception system; to illustrate the model, we use as a case study a task typical of mechanical manipulation operations. The model rests on human perceptual discrimination and is motivated by a query-based view of the observer.