In an Image Understanding framework, our aim is to reconstruct an actual indoor scene from a (sequence of) stereoscopic color image pair(s). The desired (synthesis-oriented) description requires the analysis of both 3D geometric and photometric parameters, so that the feedback provided by image synthesis can be used to control the image analysis. The environment model is a hierarchy of polyhedral 3D objects (planar Lambertian facets). Two main physical phenomena determine the image intensities: surface reflectance properties and light sources. From illumination models established in Computer Graphics, we derive the appropriate irradiance equations. Rather than a point source located at infinity, we choose isotropic point sources whose energy decreases with distance. This allows us to discriminate small irradiance gradients inside regions. For indoor scenes, such photometric models are more realistic, owing to the presence of ceiling lights, desk lamps, and so on. Both a photometric reconstruction algorithm and a technique for localizing the 'dominant' light source are presented, along with lighting simulations. For comparison purposes, corresponding artificial images are shown. With this work, we wish to highlight the fruitful cooperation between the Vision and Graphics domains in order to perform a more accurate scene reconstruction, both photometrically and geometrically. The emphasis is on the illumination characterization, which influences the scene interpretation.
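To make the photometric model concrete, the following is a minimal sketch of the kind of irradiance equation described above: a planar Lambertian facet lit by an isotropic point source whose incident energy falls off with the square of the distance (in contrast to the constant illumination of a source at infinity). The function name, the exact normalization constant, and the parameter set are illustrative assumptions, not the paper's own equations.

```python
import math

def lambertian_irradiance(point, normal, albedo, source_pos, source_power):
    """Reflected value at a surface point on a Lambertian facet lit by an
    isotropic point source of finite power.

    Assumed model (not the paper's exact formulation): the source radiates
    its power uniformly over the sphere, giving inverse-square attenuation,
    and the facet reflects according to Lambert's cosine law.
    """
    # Vector from the surface point to the source, and its length.
    d = [s - p for s, p in zip(source_pos, point)]
    r = math.sqrt(sum(c * c for c in d))
    l = [c / r for c in d]  # unit light direction
    # Lambert's cosine term: angle between surface normal and light direction.
    cos_theta = sum(n * c for n, c in zip(normal, l))
    if cos_theta <= 0.0:
        return 0.0  # facet faces away from the source
    # Inverse-square falloff times the cosine term and surface albedo.
    return albedo * source_power * cos_theta / (4.0 * math.pi * r * r)

# A horizontal facet at the origin, lit from directly above at height 2:
# cos_theta = 1 and r = 2, so the value is albedo * power / (16 * pi).
e = lambertian_irradiance((0, 0, 0), (0, 0, 1), 1.0, (0, 0, 2), 100.0)
```

Because the attenuation depends on distance, two points of the same facet receive different irradiance from a nearby lamp; it is exactly this small gradient inside a region that the distant-source model cannot represent.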