In computer vision, the use of stereo cameras for depth perception is motivated by the fact that in human vision a single percept arises from two retinal images through a process called "fusion". Most stereo algorithms, however, are concerned with recovering depth and three-dimensional shape irrespective of their relevance to the human visual system. Recent progress in the study of the brain mechanisms of vision has opened new vistas in computer vision research. This paper investigates this knowledge base and its applicability to improving computer stereo vision. To this end, (1) a stereo vision model consistent with neurophysiological evidence on the human binocular system is established, and (2) a computationally efficient algorithm implementing this model is developed. The algorithm has been tested on both computer-generated and real-scene images. The results from all directional subimages are combined to obtain a complete description of the target surface from disparity measurements.
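The abstract does not specify the paper's algorithm, which operates on directional subimages. As a generic, hedged illustration of how depth is recovered from disparity measurements in stereo vision, the sketch below uses simple block matching with a sum-of-absolute-differences cost; the function names, window size, and cost function are illustrative assumptions, not the authors' method:

```python
import numpy as np

def block_match_disparity(left, right, max_disp=8, win=3):
    """Estimate per-pixel disparity by comparing a window around each
    left-image pixel with horizontally shifted windows in the right
    image, using sum of absolute differences (SAD) as the match cost.
    (Illustrative sketch, not the paper's directional-subimage method.)"""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
            best_cost, best_d = np.inf, 0
            # A scene point at column x in the left image appears at x - d
            # in the right image; search candidate disparities d.
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(float)
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

def disparity_to_depth(disp, focal_px, baseline_m):
    """Triangulate depth Z = f * B / d; zero disparity maps to infinity."""
    return np.where(disp > 0, focal_px * baseline_m / np.maximum(disp, 1), np.inf)
```

In practice, such per-window matches are noisy; the paper's combination of results from directional subimages can be read as one way of resolving ambiguous matches before computing the final surface description.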