Rosetta is one of the cornerstone missions of the European Space Agency; it will rendezvous with comet 67P/Churyumov-Gerasimenko in 2014. The imaging instrument on board the satellite is OSIRIS (Optical, Spectroscopic and Infrared Remote Imaging System), a cooperation among several European institutes, which consists of two cameras: a Narrow Angle Camera (NAC) and a Wide Angle Camera (WAC).

The WAC optical design is innovative: it adopts an all-reflecting, unvignetted and unobstructed two-mirror configuration that covers a 12° × 12° field of view at an F/5.6 aperture and gives a nominal contrast ratio of about 10⁻⁴.

The flight model of this camera has been successfully integrated and tested in our laboratories, and has finally been integrated on the satellite, which is awaiting launch in February 2004.

In this paper we describe the optical characteristics of the camera and summarize the results obtained so far from the preliminary calibration data. The analysis of the optical performance of this model shows good agreement between theoretical predictions and experimental results.
Object shape and camera motion can be recovered from a sequence of images using a set of feature point correspondences. This is known as the structure from motion problem. This paper describes a method of employing geometrical features available in a scene, in the form of straight lines, in a factorization-based structure from motion application. The effects of inaccuracies in the feature data can be reduced by constraining the reconstructed features corresponding to the points forming straight lines. Our main contribution in this paper is the use of such geometric features to refine the shape recovery, building on current advancements in the factorization method. Reconstructed features are mapped to straight lines, and the measurement matrix containing the image feature data is updated with the adjusted data. This increases the accuracy of the reconstruction both perceptually and quantitatively. The algorithm first obtains a reconstruction using singular value decomposition; sets of reconstructed feature points are then mapped to 3D lines. The measurement matrix is refined, followed by a second phase of factorization and, optionally, normalization to obtain a metric reconstruction. Results for both synthetic and real image sequences are presented.
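The pipeline summarized above (SVD-based factorization, mapping collinear reconstructed points onto fitted 3D lines, rebuilding the measurement matrix, and re-factorizing) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, the orthographic noise-free setup, and the choice of two refinement iterations are illustrative assumptions.

```python
import numpy as np

def factorize(W):
    """Rank-3 factorization of a centered measurement matrix W (2F x P)
    into motion M (2F x 3) and shape S (3 x P), Tomasi-Kanade style."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])
    S = np.sqrt(s[:3])[:, None] * Vt[:3, :]
    return M, S

def snap_to_line(S, idx):
    """Project the 3D points S[:, idx] onto their best-fit straight line
    (the principal axis of the centered point set)."""
    P = S[:, idx]
    c = P.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(P - c)      # dominant direction of the points
    d = U[:, :1]
    return c + d @ (d.T @ (P - c))

def line_constrained_sfm(W, line_groups, iters=2):
    """Alternate factorization with collinearity enforcement: reconstruct,
    snap each group of supposedly collinear points onto its fitted 3D line,
    rebuild the measurement matrix from the adjusted shape, re-factorize."""
    t = W.mean(axis=1, keepdims=True)   # subtract per-row image centroids
    Wc = W - t
    for _ in range(iters):
        M, S = factorize(Wc)
        for idx in line_groups:
            S[:, idx] = snap_to_line(S, idx)
        Wc = M @ S                      # refined measurement matrix
    return M, S
```

With exact orthographic projections of truly collinear points, the snapping step is a near-identity correction; with noisy data it pulls the reconstruction back toward the known line structure, which is the effect the method exploits.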
Focus of attention is often attributed to biological vision systems, in which the entire field of view is first monitored and attention is then focused on an object of interest. We propose using a similar approach for object recognition in a color image sequence. The intention is to locate an object matching a prior goal and then concentrate on the detected object so that the imaging device can be guided toward it. We use the capabilities of the intelligent image analysis framework developed in our laboratory to dynamically generate an algorithm that detects the particular type of object specified by the user's object description.
The proposed method uses color clustering along with segmentation. The segmented image with labeled regions is used to calculate shape descriptor parameters. These parameters, together with the color information, are matched against the input description. Gaze is then controlled by issuing camera movement commands as appropriate.
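The loop described above (cluster colors, label regions, compute shape descriptors, match against the description, and issue a camera movement command) can be sketched as follows. All names here are hypothetical, the k-means seeding is made deterministic for reproducibility, and the "camera command" is reduced to a pixel offset of the matched region's centroid from the image center; none of this is the framework's actual API.

```python
import numpy as np
from collections import deque

def kmeans_colors(pixels, k=3, iters=10):
    """Naive k-means clustering of RGB pixels (N x 3).
    Centers are seeded from evenly spaced distinct pixel colors
    so that the result is deterministic (an illustrative choice)."""
    uniq = np.unique(pixels, axis=0)
    centers = uniq[np.linspace(0, len(uniq) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        labels = np.argmin(((pixels[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

def connected_regions(mask):
    """4-connected component labeling of a boolean mask via BFS."""
    h, w = mask.shape
    labels = np.zeros((h, w), int)
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y, x] and labels[y, x] == 0:
                count += 1
                labels[y, x] = count
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = count
                            q.append((ny, nx))
    return labels, count

def gaze_command(img, target_color, min_area=4, tol=60.0):
    """Segment by color clustering, match regions against a target color
    description, and return a (pan, tilt) pixel offset toward the match,
    or None when nothing in view fits the description."""
    h, w, _ = img.shape
    pix = img.reshape(-1, 3).astype(float)
    labels, centers = kmeans_colors(pix)
    # pick the cluster whose mean color is closest to the description
    j = int(np.argmin(((centers - target_color) ** 2).sum(-1)))
    if ((centers[j] - target_color) ** 2).sum() ** 0.5 > tol:
        return None
    mask = (labels == j).reshape(h, w)
    regions, n = connected_regions(mask)
    best = max(range(1, n + 1), key=lambda r: (regions == r).sum(), default=None)
    if best is None or (regions == best).sum() < min_area:
        return None
    ys, xs = np.nonzero(regions == best)
    # centroid offset from the image center drives the camera movement
    return xs.mean() - (w - 1) / 2, ys.mean() - (h - 1) / 2
```

In a real system the shape descriptors would be richer than the area threshold used here (e.g. aspect ratio and compactness), and the pixel offset would be converted to pan/tilt angles through the camera's intrinsic parameters.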
We present some preliminary results that demonstrate the success of this approach.