Automatically assessing properties of dynamic cameras for camera selection and rapid deployment of video content analysis tasks in large-scale ad-hoc networks
5 October 2017
Abstract
Video analytics is essential for managing the large quantities of raw data produced by video surveillance systems (VSS) for the prevention, repression and investigation of crime and terrorism. Analytics is highly sensitive to changes in the scene and in the optical chain, so a VSS with analytics needs careful configuration and prompt maintenance to avoid false alarms. However, there is a trend from static VSS consisting of fixed CCTV cameras towards more dynamic VSS deployments over public/private multi-organization networks, consisting of a wider variety of visual sensors, including pan-tilt-zoom (PTZ) cameras, body-worn cameras and cameras on moving platforms. This trend will lead to more dynamic scenes and more frequent changes in the optical chain, creating structural problems for analytics. If these problems are not adequately addressed, analytics will not be able to keep meeting end users’ developing needs. In this paper, we present a three-part solution for managing the performance of complex analytics deployments. The first part is a register containing metadata that describes relevant properties of the optical chain, such as intrinsic and extrinsic calibration, and parameters of the scene, such as lighting conditions or measures of scene complexity (e.g. the number of people). The second part frequently assesses these parameters in the deployed VSS, stores changes in the register, and signals relevant changes in the setup to the VSS administrator. The third part uses the information in the register to dynamically configure analytics tasks based on VSS operator input. To support the feasibility of this solution, we give an overview of related state-of-the-art technologies for autocalibration (self-calibration), scene recognition and lighting estimation in relation to person detection. The presented solution allows for rapid and robust deployment of Video Content Analysis (VCA) tasks in large-scale ad-hoc networks.
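As a minimal sketch of the register-and-monitor idea described above (this example is illustrative, not taken from the paper: the entry fields, thresholds and function names are all assumptions), each register entry could hold a camera's optical-chain and scene properties, and a monitor could compare successive assessments to decide which changes are worth signalling to the VSS administrator:

```python
# Illustrative sketch of the proposed register (all names and
# thresholds are hypothetical, not from the paper).
from dataclasses import dataclass

@dataclass
class RegisterEntry:
    camera_id: str
    focal_length_px: float   # intrinsic calibration (simplified to one value)
    position_m: tuple        # extrinsic calibration: (x, y, z) in metres
    lighting_lux: float      # estimated scene illumination
    person_count: int        # scene-complexity measure

def relevant_changes(old: RegisterEntry, new: RegisterEntry,
                     focal_tol: float = 5.0, lux_tol: float = 50.0) -> list:
    """Return the properties whose change should be signalled to the
    VSS administrator (tolerances are illustrative)."""
    changes = []
    if abs(new.focal_length_px - old.focal_length_px) > focal_tol:
        changes.append("intrinsics")
    if new.position_m != old.position_m:
        changes.append("extrinsics")
    if abs(new.lighting_lux - old.lighting_lux) > lux_tol:
        changes.append("lighting")
    return changes
```

A periodic assessment task would then build a fresh `RegisterEntry` per camera, call `relevant_changes` against the stored entry, and persist the new values; the third part of the solution would query the same register when configuring a VCA task.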
© (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Richard J. M. den Hollander, Henri Bouma, Jeroen H. C. van Rest, Johan-Martijn ten Hove, Frank B. ter Haar, Gertjan J. Burghouts, "Automatically assessing properties of dynamic cameras for camera selection and rapid deployment of video content analysis tasks in large-scale ad-hoc networks", Proc. SPIE 10441, Counterterrorism, Crime Fighting, Forensics, and Surveillance Technologies, 1044108 (5 October 2017); https://doi.org/10.1117/12.2268797