Visual surveillance systems provide real-time monitoring of events or the environment. The availability of low-cost sensors and processors has increased the number of possible applications of such systems. However, designing an optimized visual surveillance system for a given application is a challenging task, which often becomes a unique design effort for each system. Moreover, choosing the components for a given surveillance application from a wide spectrum of available alternatives is not easy. In this paper, we propose to use a general surveillance taxonomy as a basis for structuring the analysis and development of surveillance systems. We demonstrate the proposed taxonomy by designing a volumetric surveillance system for monitoring the movement of eagles in wind parks, with the aim of avoiding their collision with wind turbines. The analysis of the problem is performed on the basis of the taxonomy, and behavioral and implementation models are identified to formulate the solution space for the problem. Moreover, we show that there is a need for generalized volumetric optimization methods for camera deployment.
A Wireless Visual Sensor Network (WVSN) is formed by deploying many Visual Sensor Nodes (VSNs) in the field. Typical applications of WVSNs include environmental monitoring, health care, industrial process monitoring, the monitoring of stadiums and airports for security reasons, and many more. In outdoor applications of WVSNs, the energy budget is limited by the batteries, and frequent battery replacement is usually not desirable. The processing and communication energy consumption of each VSN therefore needs to be optimized so that the network remains functional for a longer duration. The images captured by a VSN contain a huge amount of data and require efficient computational resources for processing and a wide communication bandwidth for transmitting the results. Image processing algorithms must be designed and developed so that they are computationally inexpensive and provide a high compression rate. For some applications of WVSNs, the captured images can be segmented into bi-level images, and bi-level image coding methods then efficiently reduce the amount of information in these segmented images. However, the compression rate of bi-level image coding methods is limited by the underlying compression algorithm. Hence, there is a need for other intelligent and efficient algorithms that are computationally less complex and provide a better compression rate than bi-level image coding. Change coding is one such algorithm: it is computationally inexpensive (it requires only exclusive-OR operations) and provides better compression efficiency than image coding, but it is effective only for applications with slight changes between adjacent frames of the video. The detection and coding of Regions of Interest (ROIs) in the change frame further reduces the amount of information in the change frame. However, if the number of objects in the change frames rises above a certain level, the compression efficiency of both change coding and ROI coding becomes worse than that of image coding. This paper explores the compression efficiency of the Binary Video Codec (BVC) for data reduction in WVSNs. We propose to implement all three compression techniques, i.e., image coding, change coding, and ROI coding, at the VSN and then select the smallest bit stream among the three results. In this way, the compression performance of the BVC never becomes worse than that of image coding. We conclude that the compression efficiency of the BVC is always better than that of change coding and always better than or equal to that of ROI coding and image coding.
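The selection step described above (encode with all three techniques, keep the smallest bit stream) can be sketched as follows. The run-length coder and the byte-size estimates are illustrative stand-ins for a real bi-level codec such as JBIG, and the ROI step is reduced to a single changed span; none of these details are taken from the paper, only the select-the-smallest logic.

```python
from itertools import groupby

def rle(bits):
    """Toy run-length code: (value, run-length) pairs, standing in for a
    real bi-level codec such as JBIG or G4 fax coding."""
    return [(v, len(list(g))) for v, g in groupby(bits)]

def code_size(runs):
    # Rough bitstream-size proxy: assume about 2 bytes per run.
    return 2 * len(runs)

def bvc_encode(prev, curr):
    """Hypothetical sketch of the BVC selection step: encode the current
    bi-level frame three ways and keep the smallest result."""
    image = rle(curr)                                   # image coding
    change_frame = [p ^ c for p, c in zip(prev, curr)]  # XOR change detection
    change = rle(change_frame)                          # change coding
    # Toy ROI coding: code only the span between first and last change.
    ones = [i for i, b in enumerate(change_frame) if b]
    if ones:
        lo, hi = ones[0], ones[-1] + 1
        roi_size = code_size(rle(change_frame[lo:hi])) + 4  # +4 bytes for ROI coordinates
    else:
        roi_size = 4
    candidates = {"image": code_size(image),
                  "change": code_size(change),
                  "roi": roi_size}
    return min(candidates, key=candidates.get), candidates
```

Frames are flattened to one-dimensional bit lists here for brevity; a real VSN would operate on two-dimensional frames with a standardized bi-level codec, but the guarantee is the same: the selected stream is never larger than the image-coded one.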
The current trend in embedded vision systems is to propose bespoke solutions for specific problems, as each application has different requirements and constraints. There is no widely used model or benchmark that aims to facilitate generic solutions in embedded vision systems. Providing such a model is a challenging task due to the large number of use cases, environmental factors, and available technologies. However, common characteristics can be identified to propose an abstract model. Indeed, the majority of vision applications focus on the detection, analysis, and recognition of objects. These tasks can be reduced to vision functions which can be used to characterize vision systems. In this paper, we present the results of a thorough analysis of a large number of different types of vision systems. This analysis led us to the development of a system taxonomy, in which a number of vision functions, as well as their combinations, characterize embedded vision systems. To illustrate the use of this taxonomy, we have tested it against a real vision system that detects magnetic particles in a flowing liquid to predict and avoid critical machinery failure. The proposed taxonomy is evaluated using a quantitative parameter, which shows that it covers 95 percent of the investigated vision systems and that its flow is ordered for 60 percent of them. This taxonomy will serve as a tool for the classification and comparison of systems and will enable researchers to propose generic and efficient solutions for the same class of vision systems.
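As an illustration of how such a quantitative evaluation might be computed, the sketch below represents the taxonomy as an ordered list of vision functions and each system as the sequence of functions it executes; a system is "covered" if all of its functions appear in the taxonomy, and "ordered" if it applies them in taxonomy order. The function names and metric definitions are assumptions for illustration, not the paper's.

```python
# Hypothetical taxonomy: an ordered list of vision functions.
TAXONOMY = ["capture", "pre_processing", "segmentation",
            "labelling", "feature_extraction", "classification"]

def covered(system):
    """A system is covered if every function it uses is in the taxonomy."""
    return all(f in TAXONOMY for f in system)

def ordered(system):
    """A system's flow is ordered if its functions follow taxonomy order."""
    idx = [TAXONOMY.index(f) for f in system if f in TAXONOMY]
    return idx == sorted(idx)

def evaluate(systems):
    """Fractions of systems that are covered, and covered-and-ordered."""
    n = len(systems)
    cov = sum(covered(s) for s in systems) / n
    ordr = sum(ordered(s) for s in systems if covered(s)) / n
    return cov, ordr
```

Running `evaluate` over a catalogue of analysed systems would yield figures analogous to the 95 percent coverage and 60 percent orderedness reported above.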
A Wireless Visual Sensor Network (WVSN) is an emerging field in which each node combines an image sensor, an on-board computation unit, a communication component, and an energy source. Compared to a traditional wireless sensor network, which operates on one-dimensional data such as temperature or pressure values, a WVSN operates on two-dimensional data (images), which requires higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where the installation of wired solutions is not feasible. Because of the wireless nature of these applications, the energy budget is limited by the batteries. Due to the limited availability of energy, the processing at each Visual Sensor Node (VSN) and the communication from the VSN to the server should consume as little energy as possible. Transmitting raw images wirelessly consumes a great deal of energy and requires high communication bandwidth. Data compression methods reduce data efficiently and are therefore effective in reducing the communication cost in a WVSN. In this paper, we compare the compression efficiency and complexity of six well-known bi-level image compression methods. The focus is on determining which compression algorithms can efficiently compress bi-level images at a computational complexity suitable for the computational platforms used in WVSNs. These results can be used as a road map for the selection of compression methods under different sets of constraints in WVSNs.
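A comparison of this kind can be organized as a small benchmarking harness. In the sketch below, the standard-library codecs zlib, bz2, and lzma stand in for the six bi-level methods (codecs such as G4 or JBIG are not in the Python standard library); the harness packs a bi-level image into bytes and records the compression ratio and encoding time of each codec.

```python
import time
import zlib, bz2, lzma

def pack_bits(bits):
    """Pack a bi-level image (flat list of 0/1) into bytes, 8 pixels per byte."""
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

# Stand-in codecs; a real study would plug in the six bi-level methods here.
CODECS = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}

def benchmark(bits):
    """Return {codec: (compression ratio, encoding time in seconds)}."""
    raw = pack_bits(bits)
    results = {}
    for name, encode in CODECS.items():
        t0 = time.perf_counter()
        compressed = encode(raw)
        elapsed = time.perf_counter() - t0
        results[name] = (len(raw) / len(compressed), elapsed)
    return results
```

On a VSN-class platform the time column would be replaced by measured cycle counts or energy, but the ratio-versus-complexity trade-off is read from the same kind of table.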
There are a number of challenges caused by the large amount of data and the limited resources available when implementing vision systems on wireless smart cameras using embedded platforms. The common challenges are limited memory, limited processing capability, limited bandwidth, and, in the case of battery-operated systems, limited power. Research in this field usually focuses on developing a specific solution for a particular problem. In order to implement a vision system on an embedded platform, designers must first investigate the resource requirements of the design; failure to do so may result in additional design time and cost to meet the specifications. There is a need for a tool that can predict the resource requirements for the development and comparison of vision solutions on wireless smart cameras. To accelerate the development of such a tool, we have used a system taxonomy, which shows that the majority of vision systems for wireless smart cameras have much in common and focus on object detection, analysis, and recognition. In this paper, we investigate the arithmetic complexity and memory requirements of vision functions by using the system taxonomy, and we propose an abstract complexity model. To demonstrate the use of this model, we have analysed a number of implemented systems and shown that the complexity model, together with the system taxonomy, can be used for the comparison and generalization of vision solutions. The study will assist researchers and designers in predicting the resource requirements of different classes of vision systems implemented on wireless smart cameras, in reduced time and with little effort. This in turn will simplify the comparison and generalization of solutions for wireless smart cameras.
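One way such an abstract complexity model could look is sketched below: each vision function from the taxonomy is assigned a per-pixel operation count and a frame-buffer requirement, and a pipeline's arithmetic and memory demands are then estimated from the image resolution. The function names and all numbers are placeholders, not measured values from the paper.

```python
# Hypothetical per-function costs: (arithmetic ops per pixel, frame buffers).
VISION_FUNCTIONS = {
    "pre_processing":     (5, 1),
    "segmentation":       (3, 1),
    "morphology":         (9, 2),   # e.g. a 3x3 structuring element
    "labelling":          (4, 2),
    "feature_extraction": (6, 1),
}

def estimate(pipeline, width, height, bytes_per_pixel=1):
    """Estimate (total arithmetic operations, peak memory in bytes)
    for one frame processed by the given pipeline."""
    pixels = width * height
    ops = sum(VISION_FUNCTIONS[f][0] for f in pipeline) * pixels
    buffers = max(VISION_FUNCTIONS[f][1] for f in pipeline)
    memory = buffers * pixels * bytes_per_pixel
    return ops, memory
```

For a QVGA frame, `estimate(["pre_processing", "segmentation", "morphology"], 320, 240)` sums 17 operations per pixel over 76,800 pixels and reserves two frame buffers, which is the kind of early estimate the tool is meant to provide.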
A Visual Sensor Network (VSN) is a network of spatially distributed cameras. The primary difference between a VSN and other types of sensor networks is the nature and volume of the information. A VSN generally consists of cameras, communication links, storage, and a central computer, where the image data from multiple cameras is processed and fused. In this paper, we use optimization techniques to reduce the cost, as derived from a model of a VSN, of tracking large birds, such as the Golden Eagle, in the sky. The core idea is to divide a given monitoring range of altitudes into a number of sub-ranges. Each sub-range is monitored by an individual VSN: VSN1 monitors the lowest range, VSN2 the next higher, and so on, such that a given area is monitored at minimum cost. The VSNs may use similar or different types of cameras but different optical components, thus forming a heterogeneous network. We have calculated the cost required to cover a given area both by treating the altitude range as a single element and by dividing it into sub-ranges. Covering a given area and altitude range with a single VSN requires 694 camera nodes, whereas dividing the range into sub-ranges of altitudes requires only 88 nodes, an 87% reduction in cost.
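Why sub-dividing helps can be illustrated with a toy coverage model: if each node's optics must resolve the target at the top of its altitude sub-range, the cross-section it monitors at the bottom of that sub-range shrinks as the ratio between the two altitudes grows, so several narrow sub-ranges need far fewer nodes than one wide range. All parameters below are illustrative assumptions and do not reproduce the paper's 694 and 88 node counts.

```python
import math

# Illustrative parameters (small-angle optics assumed throughout).
SENSOR_PIX = 2000    # pixels across the sensor
TARGET = 2.0         # target (eagle wingspan) size in metres
PIX_ON_TARGET = 10   # pixels required across the target
AREA_SIDE = 1000.0   # side of the square area to monitor, in metres

def nodes_for(h_low, h_high):
    """Camera nodes needed to monitor the area over altitudes [h_low, h_high].
    The field of view is fixed by the resolution requirement at h_high,
    so the covered cross-section is smallest (and binding) at h_low."""
    fov = SENSOR_PIX * TARGET / (PIX_ON_TARGET * h_high)  # radians
    footprint = h_low * fov                               # covered side at h_low
    return math.ceil(AREA_SIDE / footprint) ** 2

def nodes_split(boundaries):
    """Total nodes when the range is split at the given altitude boundaries."""
    return sum(nodes_for(a, b) for a, b in zip(boundaries, boundaries[1:]))
```

With these numbers, a single VSN over 50-800 m needs 1600 nodes, while splitting into four sub-ranges at 50, 100, 200, 400, and 800 m needs 100 nodes in total, the same qualitative effect as the paper's 694-to-88 reduction.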