This paper presents an original approach for a vision-based quality control system built around a cognitive intelligent sensory system. The approach proceeds in two steps. First, an initialization phase yields structural knowledge about the image acquisition conditions, the type of illumination sources, etc. Second, the image is iteratively evaluated using this knowledge together with complementary information (e.g., CAD models and tolerance information). Finally, the information describing the quality of the part under evaluation is extracted. A further aim of the approach is to enable strategies that determine, for instance, the “next best view” required to complete the current object description, by dynamically adjusting the knowledge base that contains this description. Such techniques primarily require investigation of three areas: intelligent self-reasoning 3D sensors, 3D image processing for accurate reconstruction, and evaluation software for comparing image-based measurements with CAD data. However, an essential preliminary step is the modeling of lighting effects. As a starting point, we first model pinpoint light sources. After introducing the objectives and principles of the approach in Sections 1 and 2, we present the implementation and the illumination modeling approach in Sections 3 and 4. First results illustrating the approach are presented in Section 5. Finally, we conclude with future directions for improving the approach.
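To make the notion of a pinpoint (ideal point) light source model concrete, the sketch below computes the irradiance received at a surface point from such a source, combining the inverse-square distance falloff with Lambert's cosine law. This is a minimal illustrative model under standard radiometric assumptions, not the paper's actual implementation; all function and parameter names are hypothetical.

```python
import math

def point_light_irradiance(light_pos, light_intensity, surface_pos, surface_normal):
    """Irradiance at a surface point due to an ideal point light source.

    Combines the inverse-square falloff (intensity / d^2) with Lambert's
    cosine law (n . l); returns 0 when the surface faces away from the light.
    All arguments are illustrative: positions and the normal are 3-tuples,
    light_intensity is the source's radiant intensity (arbitrary units).
    """
    # Vector from the surface point toward the light
    lx = light_pos[0] - surface_pos[0]
    ly = light_pos[1] - surface_pos[1]
    lz = light_pos[2] - surface_pos[2]
    d2 = lx * lx + ly * ly + lz * lz       # squared distance to the source
    d = math.sqrt(d2)
    lx, ly, lz = lx / d, ly / d, lz / d    # unit light direction

    # Normalize the surface normal (defensive; callers may pass non-unit normals)
    nn = math.sqrt(sum(c * c for c in surface_normal))
    nx, ny, nz = (c / nn for c in surface_normal)

    cos_theta = nx * lx + ny * ly + nz * lz
    if cos_theta <= 0.0:
        return 0.0                         # surface oriented away from the source
    return light_intensity * cos_theta / d2

# Example: a source 2 units directly above a horizontal surface point
e = point_light_irradiance((0.0, 0.0, 2.0), 100.0, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
# cos_theta = 1 and d^2 = 4, so e = 25.0
```

In a real acquisition setup this model would be evaluated per pixel against the reconstructed surface, which is what makes knowledge of the illumination sources (gathered in the initialization phase) usable during the iterative evaluation step.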