Multi-spectral camera setups generally allow surveillance applications to operate even under unfavorable conditions, such as low-light environments or scenes with vastly different lighting conditions. A high-resolution color camera, a high-dynamic-range camera and an infrared thermal camera were combined into a self-sufficient platform for continuous outdoor operation. The sheer amount of produced data poses a serious challenge, both in terms of available bandwidth and processing power, since self-sufficiency requires relatively low-power components, and in terms of privacy, since high-resolution, multi-spectral image data are sensitive information. Relevant objects of interest therefore had to be efficiently extracted, tracked and georeferenced on the sensor platform itself. These data, from one or more sensor heads, are then sent via WLAN or mobile data link to a central control unit, possibly anonymized, e.g. to prompt immediate action by a human operator in a disaster-response use case, or stored for further offline analysis in a "Smart City" framework. Applying the classic stereo vision approach would require calibrating both the intrinsic and extrinsic parameters of all cameras. The multi-spectral nature of the input data complicates the correspondence problem for extrinsic parameter calibration and subsequent stereo matching, while intrinsic parameter calibration according to the pinhole camera model is made difficult by the cameras having to be focused at infinity. However, by making certain reasonable assumptions about the observed scene in typical use cases, and accepting a possible loss in localization accuracy, camera calibration could be limited to the bare minimum, and less computational power was required at run time.
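One common way to georeference detections without full stereo calibration is to assume the observed objects lie on a flat ground plane, so that a single image-to-ground homography suffices. The abstract does not specify the method used; the sketch below is an illustrative assumption, with the homography `H` and the helper `pixel_to_ground` being hypothetical names, not the authors' implementation.

```python
import numpy as np

def pixel_to_ground(px, py, H):
    """Map an image pixel to ground-plane coordinates via a homography.

    H is a 3x3 image-to-ground homography, obtainable e.g. by matching a
    few image points to surveyed ground control points (a hypothetical
    calibration step, much cheaper than full stereo calibration).
    """
    p = H @ np.array([px, py, 1.0])   # homogeneous projection
    return p[:2] / p[2]               # dehomogenize to (x, y) in metres

# Toy homography: a uniform scale of 0.05 m per pixel (illustrative only).
H = np.array([[0.05, 0.0, 0.0],
              [0.0, 0.05, 0.0],
              [0.0, 0.0, 1.0]])
x, y = pixel_to_ground(200, 100, H)
# -> x = 10.0, y = 5.0 (metres on the assumed ground plane)
```

The flat-ground assumption trades localization accuracy for a drastically simpler calibration, matching the trade-off the abstract describes.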
KEYWORDS: Image processing, Visual process modeling, Signal processing, Sensors, Analog electronics, Image sensors, Image acquisition, RGB color model, Camera shutters, Computer programming
Image sensors with integrated, programmable signal processing execute computationally intensive processing steps during or immediately after image acquisition, thereby allowing the output data to be reduced to relevant features only. In contrast to conventional image processing systems, the tasks of image acquisition and actual image processing in such a “vision chip” cannot be viewed independently of each other. Both to validate the architecture and to support programming in the course of application development, system-level modeling has been performed as part of the design process of the vision-system-on-chip. Apart from implementing all essential components of the integrated control unit as well as the digital and analog signal processing, special attention has been paid to integration into the development environment. Being able to purposefully insert parameter deviations and/or defects at different points of the analog processing enables investigations of their influence on image processing algorithms executed on the image sensor. Due to its high simulation speed and its compatibility with the real system, especially regarding the programs to be executed, the resulting simulation model is very well suited for use in application development.
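The idea of purposefully inserting parameter deviations into the simulated analog path can be illustrated as follows. This is a minimal sketch under assumed fault models (per-pixel gain and offset deviations, stuck pixels); the function name `simulate_readout` and the specific noise distributions are illustrative, not the actual simulation model of the chip.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_readout(scene, gain_sigma=0.0, offset_sigma=0.0, dead_fraction=0.0):
    """Simulate an analog readout stage with injectable deviations.

    gain_sigma / offset_sigma model per-pixel fixed-pattern deviations;
    dead_fraction models stuck-at-zero pixels. All assumed for illustration.
    """
    gain = 1.0 + rng.normal(0.0, gain_sigma, scene.shape)
    offset = rng.normal(0.0, offset_sigma, scene.shape)
    out = gain * scene + offset
    out[rng.random(scene.shape) < dead_fraction] = 0.0  # stuck pixels
    return np.clip(out, 0.0, 1.0)

# Run the same on-sensor processing step (here: a simple threshold)
# on an ideal and a deviation-afflicted readout of a uniform scene.
scene = np.full((64, 64), 0.5)
ideal_mask = simulate_readout(scene) > 0.4
faulty_mask = simulate_readout(scene, gain_sigma=0.05,
                               offset_sigma=0.02, dead_fraction=0.01) > 0.4
agreement = np.mean(ideal_mask == faulty_mask)  # fraction of matching pixels
```

Comparing algorithm output with and without injected deviations, as above, is exactly the kind of robustness study the simulation model is said to enable.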