A computational visual system (CVS) has been developed that segments objects in natural scenes using algorithms and filtering elements similar to those employed by the human visual system. The filtering elements of the CVS are based on neural networks elucidated by physiological and anatomical studies; its algorithms are based on data from psychophysical studies. The CVS classifies different types of patterns, based on object shape, texture, position in the visual field, and the amount of motion parallax across successive scenes, without any a priori models. Psychophysical and physiological evidence indicates that, when analyzing 3D scenes, people construct an object-based, event-driven perception. The object-based representation modeled here focuses on the object formation found in the dorsal cortical pathway, which is used to locate an object in 3D space; therefore, the interaction between the eye-head movement system and the pattern recognition system is modeled. Both global scene attributes, used to reveal objects masked by shadows and to improve object segmentation, and local object attributes, defined by the boundary of contrast differences between an object and its background, are modeled. The importance of using paired odd- and even-symmetric detectors to form the boundary and analyze the texture of an object is emphasized. This information is used to construct a viewer-centered, object-based map of the scene built from multiple object attributes. Algorithms that incorporate the relative weighting of the different object attributes used to discriminate objects are instantiated as computational networks combining competitive and cooperative interactions.
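The abstract does not specify the detector implementation, but paired odd- and even-symmetric detectors are commonly realized as quadrature Gabor filters whose combined response gives a phase-invariant local energy that peaks on boundaries. The sketch below is illustrative only: the function names, kernel sizes, and parameter values (`wavelength`, `sigma`, `theta`) are assumptions, not details from the paper.

```python
import numpy as np

def gabor_pair(size=15, wavelength=6.0, sigma=3.0, theta=0.0):
    """Return an even (cosine-phase) and odd (sine-phase) Gabor kernel pair."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates so the filter is oriented at angle theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    even = envelope * np.cos(2 * np.pi * xr / wavelength)  # symmetric detector
    odd = envelope * np.sin(2 * np.pi * xr / wavelength)   # antisymmetric detector
    return even, odd

def local_energy(image, even, odd):
    """Quadrature energy sqrt(E^2 + O^2) via a naive valid-mode correlation."""
    kh, kw = even.shape
    H, W = image.shape
    out_e = np.zeros((H - kh + 1, W - kw + 1))
    out_o = np.zeros_like(out_e)
    for i in range(out_e.shape[0]):
        for j in range(out_e.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            out_e[i, j] = np.sum(patch * even)
            out_o[i, j] = np.sum(patch * odd)
    # energy is large near contrast boundaries regardless of edge polarity/phase
    return np.sqrt(out_e**2 + out_o**2)
```

Applied to a step edge, the odd detector responds strongly at the boundary while the even detector responds to line-like structure; their combined energy localizes the contrast boundary between an object and its background independent of edge phase.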