Given a CAD model of an object and a set of inspection specifications, we would like to automatically generate a vision procedure to inspect a part that is an instance of the model. Since the position and orientation of the part may be wholly or partially unknown, the first step in the procedure is to determine the pose of the object. Assuming the sensor is a CCD camera, this reduces to matching features extracted from a two-dimensional gray-tone perspective projection image of the object to the corresponding three-dimensional features of the model. Since 2D-to-3D matching is more complex and time-consuming than 2D-to-2D matching, our preference is to match a data structure representing features and their spatial relationships extracted from the image against a similar 2D data structure generated from the CAD model. Our approach is to use the CAD model to predict the features that will appear in different views of the object under different lighting conditions and to use these visible features to generate a set of view classes for use in the matching. A view class is a cluster of views of the object that all produce similar data structures; a single representative data structure can then stand for the entire cluster and be matched against the structure extracted from the image. Important questions that must be answered are: 1) how do we predict features from CAD models without generating entire artificial images of the object, 2) what is a good representation for the features extracted from one view, 3) what criteria should be used for forming view classes, and 4) how can the matching from part structures to view-class representatives be achieved most efficiently? In this paper we report on our ongoing research in these areas.
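The view-class idea above can be illustrated with a minimal sketch. Here each predicted view is reduced to a set of visible feature labels, similarity between views is measured with the Jaccard coefficient, and views are greedily grouped into classes whose first member serves as the representative. The feature labels, the similarity measure, and the threshold-based clustering are all illustrative assumptions, not the algorithm of the paper.

```python
# Hypothetical sketch of view-class formation: each view is a set of
# visible feature labels; views whose feature sets are similar enough
# to a class representative are grouped into that view class.
# Similarity measure and threshold are illustrative assumptions.

def similarity(view_a, view_b):
    """Jaccard similarity between two views' feature sets."""
    a, b = set(view_a), set(view_b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def form_view_classes(views, threshold=0.6):
    """Greedily cluster views: a view joins the first class whose
    representative it resembles at least `threshold`; otherwise it
    starts a new class with itself as representative."""
    classes = []  # list of (representative, list of member views)
    for view in views:
        for rep, members in classes:
            if similarity(view, rep) >= threshold:
                members.append(view)
                break
        else:
            classes.append((view, [view]))
    return classes

# Example: four synthetic views described by visible feature labels.
views = [
    {"ellipse", "line1", "line2"},           # frontal view
    {"ellipse", "line1", "line2", "line3"},  # near-frontal view
    {"corner", "line4", "line5"},            # side view
    {"corner", "line4", "line5", "line6"},   # near-side view
]
classes = form_view_classes(views)
print(len(classes))  # prints 2: a frontal class and a side class
```

In a full system the views would carry relational structure rather than flat feature sets, and the representative would be chosen to match efficiently against the structure extracted from the image; this sketch only shows the clustering step.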
Keywords: matching, view class, CAD model, relational pyramid

This research was supported by the National Aeronautics and Space Administration (NASA) through a subcontract from Machine Vision International and by Boeing Commercial Aircraft Company.