21 June 2000 Smart sensors: lessons learned from computer vision
This paper describes recent work in the field of computer vision and relates the results to the much broader class of smart sensors. The paper begins with an overview of recent work on combining chemical sensors with neural networks. Such devices allow classification of samples into distinct states, forming, to give one example, an electronic nose. These often require broad selectivity and are formed from small arrays of sensor elements. This is followed by a description of two aspects of the authors' own work in the field of computer vision. One is an automatic control system, based on computer vision, for a microrobot-based microassembly station. The other concerns the automatic recognition of objects regardless of scale. For the latter, we have shown that when sensor input noise is taken into consideration, a conventional CCD array cannot provide a representation of an object that is robust to scale. In contrast, biologically inspired retinal arrays can. The paper concludes with the perspective that all sensor systems are data dependent. This is of little concern if the sensor consists of a single element, but becomes more important as larger arrays are fabricated. Such sensor arrays may have to emulate biological systems, in a manner analogous to a retinal camera.
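The scale tolerance of retinal arrays comes from their log-polar geometry: receptive-field spacing grows geometrically with eccentricity, so scaling an object about the fixation point becomes (approximately) a shift along the radial axis, which a shift-tolerant matcher can absorb. The abstract gives no implementation details, so the following is only an illustrative sketch of such a sampling grid; the function name, ring/wedge counts, and nearest-neighbour sampling are our own assumptions, not the authors' method.

```python
import numpy as np

def log_polar_sample(img, n_rings=16, n_wedges=32):
    """Sample a grayscale image on a retina-like log-polar grid.

    Illustrative sketch only: ring radii grow geometrically (like
    retinal receptive fields), so scaling the object about the centre
    roughly corresponds to a shift along the ring axis of the output.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    # Geometric progression of radii from ~1 pixel out to the image edge.
    radii = r_max ** (np.arange(1, n_rings + 1) / n_rings)
    # Evenly spaced angular "wedges" around the centre.
    angles = np.linspace(0.0, 2.0 * np.pi, n_wedges, endpoint=False)
    out = np.empty((n_rings, n_wedges))
    for i, r in enumerate(radii):
        ys = np.clip(np.round(cy + r * np.sin(angles)).astype(int), 0, h - 1)
        xs = np.clip(np.round(cx + r * np.cos(angles)).astype(int), 0, w - 1)
        out[i] = img[ys, xs]  # nearest-neighbour sampling, for simplicity
    return out

# Example: a centred bright square sampled onto a 16x32 log-polar grid.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
lp = log_polar_sample(img)
```

Note how the uniform pixel grid of a CCD lacks this property: under scaling, every pixel's content changes, and sensor noise then prevents any fixed-resolution representation from staying consistent across scales.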
© (2000) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Robert B. Yates, Stuart Meikle, "Smart sensors: lessons learned from computer vision", Proc. SPIE 3990, Smart Structures and Materials 2000: Smart Electronics and MEMS, (21 June 2000); https://doi.org/10.1117/12.388888
