This paper presents a model-based object recognition method that combines a bottom-up evidence accumulation process with a top-down hypothesis verification process. The hypothesize-and-test paradigm is fundamental in model-based vision; however, open research issues remain concerning how the bottom-up process should gather evidence and when the top-down process should take the lead. To accumulate evidence, we use a configuration space whose points each represent a configuration of an object (i.e., its position and orientation in an image). When a feature is found that matches a part of an object model, the configuration space is updated to reflect the object configurations consistent with that match. A region of the configuration space where evidence from multiple such feature-part matches overlaps indicates a high probability that the object is present in the image with a configuration in that region. The cost of the bottom-up process of further accumulating evidence for localization and the cost of the top-down process of recognizing the object by verification are compared, taking into account the size of the search region and the probability that verification will succeed. When the cost of the top-down process becomes lower, hypotheses are generated and their verification is started. The first version of the recognition program has been written and applied to the recognition of a jet airplane in synthetic aperture radar (SAR) images. In building the object model, we used a SAR simulator as a sensor model, so that we could predict which object features are reliably detectable by the sensor. The program is being tested on simulated SAR images and shows promising performance.
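The evidence-accumulation step described above can be illustrated with a minimal Hough-style voting sketch. Everything below (the grid resolution, the hypothetical feature-part matches, and the vote threshold) is an illustrative assumption, not the paper's actual implementation:

```python
import numpy as np

# Configuration space discretized as an (x, y, theta) grid.
# Grid sizes are illustrative assumptions, not taken from the paper.
NX, NY, NTH = 32, 32, 18  # 18 orientation bins of 20 degrees each

def vote(accumulator, configurations):
    """Record one piece of evidence: a feature-part match constrains the
    object's configuration to a set of (x, y, theta) cells."""
    for (ix, iy, ith) in configurations:
        accumulator[ix, iy, ith] += 1

def candidate_region(accumulator, min_votes):
    """Return cells where enough independent pieces of evidence overlap,
    i.e., configurations worth handing to top-down verification."""
    return np.argwhere(accumulator >= min_votes)

acc = np.zeros((NX, NY, NTH), dtype=int)

# Three hypothetical feature-part matches; each is consistent with a
# small set of configurations, and all three overlap at (10, 12, 3).
vote(acc, [(10, 12, 3), (10, 13, 3)])
vote(acc, [(10, 12, 3), (11, 12, 4)])
vote(acc, [(10, 12, 3), (9, 12, 3)])

# Only the configuration supported by all three matches survives.
candidates = candidate_region(acc, min_votes=3)
```

In this toy setting the high-overlap region collapses to the single cell `(10, 12, 3)`; in practice each match would contribute a broader region of consistent configurations, and the overlap region's size would feed the cost comparison between further bottom-up accumulation and top-down verification.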