The miniaturization of photodetectors often comes at the expense of a smaller photosensitive area, which reduces the
signal and thus limits image quality. One way to overcome this limitation is to reduce the photosensitive area with
no reduction in signal, i.e., to harvest the light. Here we investigate, theoretically and experimentally, light harvesting with
nanostructured metals. Nanostructured metals can also provide additional functionality, such as polarization filtering, which
is also investigated. After defining the figures of merit used to characterize light-harvesting and polarization-filtering
structures, we detail the fabrication and measurement process. Structures were made on glass substrates, as a post-processing
step on CMOS-fabricated detectors, and directly within the CMOS fabrication of the detectors. The optical
characterization results are presented and compared with theory. Finally, we discuss the challenges and advantages of
integrating metallic nanostructures within the CMOS process.
A 128 × 128 pixel, 120 dB vision sensor that extracts, at the pixel level, the contrast magnitude and direction of local image features is used to implement a lane-tracking system. The contrast representation (relative change of illumination) delivered by the sensor is independent of the illumination level. Together with the high dynamic range of the sensor, it ensures a very stable image-feature representation even under strong spatial and temporal inhomogeneities of the illumination. Image features are dispatched off-chip according to their contrast magnitude, prioritizing features with high contrast. This drastically reduces the amount of data transmitted out of the chip, and hence the processing power required for subsequent processing stages. To compensate for the low fill factor (9%) of the sensor, micro-lenses were deposited, increasing the sensitivity by a factor of 5, corresponding to an equivalent of 2000 ASA. An algorithm exploiting the contrast representation output by the vision sensor has been developed to estimate the position of a vehicle relative to the road markings. The algorithm first detects the road markings based on the contrast-direction map. It then performs quadratic fits on selected 3 × 3 pixel kernels to achieve sub-pixel accuracy in the estimation of the lane-marking positions. The resulting precision of the vehicle lateral-position estimate is 1 cm. The algorithm performs efficiently under a wide variety of environmental conditions, including night and rainy conditions.
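The quadratic-fit step for sub-pixel localization can be illustrated with a minimal sketch. The abstract describes fits on 3 × 3 pixel kernels; the 1D three-sample case below conveys the same idea in its simplest form, and the function names and interfaces are illustrative assumptions, not taken from the paper:

```python
def subpixel_peak(y_left: float, y_mid: float, y_right: float) -> float:
    """Sub-pixel offset of an extremum from three equally spaced samples.

    Fits a parabola through (-1, y_left), (0, y_mid), (1, y_right) and
    returns the offset of its vertex from the center sample, in pixels.
    """
    denom = y_left - 2.0 * y_mid + y_right
    if denom == 0.0:  # flat region: no well-defined extremum
        return 0.0
    return 0.5 * (y_left - y_right) / denom


def refine_marking_position(contrast_row: list[float]) -> float:
    """Locate the strongest contrast response in a row to sub-pixel accuracy."""
    # Integer-pixel peak first (interior samples only), then quadratic refinement.
    i = max(range(1, len(contrast_row) - 1), key=lambda k: contrast_row[k])
    return i + subpixel_peak(contrast_row[i - 1], contrast_row[i],
                             contrast_row[i + 1])
```

For example, sampling a contrast profile whose true maximum lies at 3.25 pixels, `refine_marking_position([-(x - 3.25) ** 2 for x in range(7)])` recovers 3.25 exactly, since the quadratic fit is exact for a parabolic profile.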