In this investigation we identify relationships between human and automated face recognition systems with respect to
compression. Further, we identify the <i>scene parameters</i> that most influence the performance of each recognition system.
The work includes testing of the systems with compressed Closed-Circuit Television (CCTV) footage, consisting of
quantified <i>scene (footage) parameters</i>. Parameters describe the content of scenes concerning camera to subject distance,
facial angle, scene brightness, and spatio-temporal busyness. These parameters have previously been shown to affect the
human visibility of useful facial information, but little work has been carried out to assess the influence they have
on automated recognition systems. In this investigation, the methodology previously employed in the human
investigation is adopted to assess the performance of three different automated systems: Principal Component Analysis,
Linear Discriminant Analysis, and Kernel Fisher Analysis. Results show that the automated systems are more tolerant to
compression than humans. In automated systems, mixed brightness scenes were the most affected and low brightness
scenes were the least affected by compression. In contrast, for humans, low brightness scenes were the most affected and
medium brightness scenes the least affected. Findings have the potential to broaden the methods used for testing imaging
systems for security applications.
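As an illustration of the first of the three automated systems, a minimal eigenface-style Principal Component Analysis recogniser can be sketched as follows. The function names and the nearest-neighbour matching rule are our own simplification, not details taken from the study:

```python
import numpy as np

def pca_subspace(train, k):
    """Compute a k-dimensional PCA (eigenface) subspace.
    train: (n_samples, n_pixels) matrix, one flattened face image per row."""
    mean = train.mean(axis=0)
    centered = train - mean
    # SVD of the centred data gives the principal components directly,
    # without forming the (large) pixel covariance matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]                     # mean image and top-k components

def project(x, mean, components):
    """Project an image (or stack of images) into the PCA subspace."""
    return (x - mean) @ components.T

def classify(probe, gallery, labels, mean, components):
    """Nearest-neighbour identity match in the PCA subspace."""
    g = project(gallery, mean, components)
    p = project(probe, mean, components)
    dists = np.linalg.norm(g - p, axis=1)
    return labels[int(np.argmin(dists))]
```

In a compression study, the gallery would hold uncompressed enrolment images and the probes would come from the compressed CCTV footage.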
In this investigation we study the effects of compression and frame rate reduction on the performance of four video
analytics (VA) systems, using a low-complexity scenario, the Sterile Zone (SZ). Additionally, we identify the
scene parameters that most influence the performance of these systems. The SZ scenario is a scene consisting of a
fence, which must not be trespassed, and a grassed area. The VA system must raise an alarm when an intruder (attack)
enters the scene. The work includes testing of the systems with uncompressed and compressed (using H.264/MPEG-4
AVC at 25 and 5 frames per second) footage, consisting of quantified scene parameters. The scene parameters include
descriptions of scene contrast, camera to subject distance, and attack portrayal. Additional footage, containing only
distractions (no attacks), is also investigated. Results have shown that every system performed differently at each
compression/frame rate level, whilst overall, compression has not adversely affected the performance of the systems.
Frame rate reduction has decreased performance, and scene parameters have influenced the behavior of the systems
differently. Most false alarms were triggered by a distraction clip containing abrupt shadows cast through the fence.
Findings could contribute to the improvement of VA systems.
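The alarm-based evaluation described above reduces, per set of clips, to counting correct detections and false alarms. A minimal sketch (the function and parameter names are illustrative, not taken from the study):

```python
def va_performance(alarms, ground_truth):
    """Summarise a VA system's clip-level results.
    alarms: list of booleans, True where the system raised an alarm.
    ground_truth: parallel list of booleans, True where an attack is present.
    Returns (detection rate over attack clips,
             false-alarm rate over distraction-only clips)."""
    tp = sum(a and g for a, g in zip(alarms, ground_truth))       # detected attacks
    fp = sum(a and not g for a, g in zip(alarms, ground_truth))   # false alarms
    fn = sum(g and not a for a, g in zip(alarms, ground_truth))   # missed attacks
    negatives = sum(not g for g in ground_truth)
    detection_rate = tp / (tp + fn) if (tp + fn) else 0.0
    false_alarm_rate = fp / negatives if negatives else 0.0
    return detection_rate, false_alarm_rate
```

Running the same clip set through each compression/frame-rate condition and comparing the two rates reproduces the kind of per-condition comparison the abstract describes.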
This paper proposes a visual scene busyness indicator obtained from the properties of a full spatial segmentation of static images. A fast and effective region merging scheme is applied for this purpose. It uses a semi-greedy merging criterion and an adaptive threshold to control segmentation resolution. The core of the framework comprises a hierarchical parallel merging model and region reduction techniques. The segmentation procedure consists of the following phases: 1. algorithmic region merging, and 2. region reduction, which includes small segment reduction and enclosed region absorption. Quantitative analyses on standard benchmark data have shown the procedure to compare favourably to other segmentation methods. Qualitative assessment of the segmentation results indicates approximate semantic correlations between segmented regions and real-world objects. This characteristic is used as a basis for quantifying scene busyness in terms of properties of the segmentation map and the segmentation process that generates it. A visual busyness indicator based on full colour segmentation is evaluated against conventional measures.
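A much-simplified stand-in for such a segmentation-based busyness measure counts connected constant-intensity regions after coarse quantisation. The paper's hierarchical region-merging scheme is far more sophisticated; the sketch below only illustrates the underlying idea that busier scenes fragment into more regions per pixel:

```python
import numpy as np
from collections import deque

def busyness_proxy(image, levels=8):
    """Crude busyness proxy: quantise a greyscale image (values 0-255) into
    `levels` bins and count 4-connected constant-value regions via flood fill.
    Returns regions per pixel (higher = busier).
    Simplified stand-in for a full region-merging segmentation."""
    q = (image // (256 // levels)).astype(int)    # coarse intensity quantisation
    h, w = q.shape
    seen = np.zeros((h, w), bool)
    regions = 0
    for sy in range(h):
        for sx in range(w):
            if seen[sy, sx]:
                continue
            regions += 1                          # new region found: flood fill it
            stack = deque([(sy, sx)])
            seen[sy, sx] = True
            while stack:
                y, x = stack.pop()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny, nx] \
                            and q[ny, nx] == q[y, x]:
                        seen[ny, nx] = True
                        stack.append((ny, nx))
    return regions / (h * w)
```

A flat image collapses to one region, while a highly textured one approaches one region per pixel, giving a usable 0-1 busyness scale.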
Bayesian Classification methods can be applied to images of watercolour paintings in order to characterize blue
and green pigments used in these paintings. Pigments found in watercolour paintings are semi-transparent
materials and their analysis provides important information on the date, the painter, the place of the production
of watercolour paintings and generally on the authenticity of these works of art. However, watercolour pigments
are difficult to characterize because their intensity depends on the amount of liquid spread during painting and
the reflective properties of the underlying support. The method described in this paper is non-destructive, non-invasive,
does not involve sampling and can be applied in situ. The methodology is based on the photometric
properties of pigments and produces computational models which classify diverse types of pigments found in
watercolour paintings. These pigments are photographed in the visible and infrared regions of the electromagnetic
spectrum, and models based on statistical characteristics of the intensity values, using a mixture of Gaussian functions,
are created. Finally, the pigments are classified using a Bayesian classification algorithm applied to the generated models.
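A hedged sketch of the classification step: here each pigment class is modelled with a single Gaussian per image band (visible and infrared) rather than the full Gaussian mixtures used in the paper, and a maximum-a-posteriori rule picks the class. All names are illustrative:

```python
import numpy as np

def fit_class_models(samples_by_class):
    """Fit per-class Gaussian intensity models, one mean/std per image band.
    samples_by_class: {class name: (n_pixels, n_bands) intensity array}.
    Single Gaussian per band stands in for the paper's Gaussian mixtures."""
    return {c: (s.mean(axis=0), s.std(axis=0) + 1e-9)
            for c, s in samples_by_class.items()}

def log_likelihood(x, mean, std):
    """Log-likelihood of intensities x under independent per-band Gaussians
    (additive constants dropped, as they cancel in the comparison)."""
    return np.sum(-0.5 * ((x - mean) / std) ** 2 - np.log(std))

def bayes_classify(x, models, priors=None):
    """Maximum-a-posteriori pigment class for a pixel's band intensities."""
    classes = list(models)
    if priors is None:
        priors = {c: 1.0 / len(classes) for c in classes}   # uniform prior
    scores = {c: log_likelihood(x, *models[c]) + np.log(priors[c])
              for c in classes}
    return max(scores, key=scores.get)
```

Training samples would come from pigment patches photographed under visible and infrared light; unknown pixels are then assigned to the class with the highest posterior score.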
Inks constitute the main element in Medieval manuscripts, and their examination and analysis provide an invaluable source of information on the authenticity of the manuscripts, the number of authors involved, and the dating of the manuscripts. Most existing methods for the analysis of ink materials are based on destructive testing techniques that require physicochemical sampling of the material. Such methods cannot be widely used because of the historical and cultural value of manuscripts. In this paper we present a novel approach for discriminating and identifying inks based on the correlations of image variations under visible and infrared illumination. Such variations are studied using co-occurrence matrices to detect the behavior of the inks during the scripting process.
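The co-occurrence analysis can be illustrated with a basic grey-level co-occurrence matrix for a given pixel displacement (a textbook construction, not the paper's exact formulation):

```python
import numpy as np

def cooccurrence(img, dx=1, dy=0, levels=4):
    """Grey-level co-occurrence matrix for displacement (dx, dy), dx, dy >= 0.
    img: 2-D integer array with values in [0, levels).
    C[i, j] counts how often level i has level j at offset (dy, dx)."""
    h, w = img.shape
    C = np.zeros((levels, levels), int)
    for y in range(h - dy):
        for x in range(w - dx):
            C[img[y, x], img[y + dy, x + dx]] += 1
    return C
```

Comparing such matrices for the same stroke imaged under visible and infrared illumination gives the kind of variation statistics the abstract refers to.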
The aim of the research introduced in this paper is to develop a unified neural network platform to model the behavior of cancerous cells. Neural networks are used both to extract features from the cell images and to control the process for recognizing whether a sequence of cell images matches the locomotive and social behavior of a cancerous cell and its probability to metastasize. The first problem tackled is the extraction of cell features from the images, such as the center of the cell. This paper gives an overview of the application and presents the results drawn from two neural network architectures, an `all connected' and a `locally connected' network, used for the extraction of cell centroid areas from images. Both networks are implemented on a distributed array of processors (DAP) and trained using the backpropagation learning algorithm.
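The backpropagation learning rule mentioned above can be illustrated on a toy task. The sketch below trains a tiny fully connected network on XOR rather than on cell images, so it shows only the learning mechanics, not the centroid-extraction networks themselves:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_xor(epochs=10000, lr=2.0, hidden=4, seed=0):
    """Train a 2-hidden-4-output-1 sigmoid network on XOR with plain
    backpropagation (gradient descent on mean squared error)."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([[0], [1], [1], [0]], float)
    W1 = rng.normal(0, 1, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)

    def forward(x):
        return sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2)

    initial_loss = float(np.mean((forward(X) - y) ** 2))
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)                  # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)       # backpropagate output error
        d_h = (d_out @ W2.T) * h * (1 - h)        # ...through the hidden layer
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)
    final_loss = float(np.mean((forward(X) - y) ** 2))
    return forward, initial_loss, final_loss
```

The same weight-update rule, scaled up and mapped onto the DAP's processor array, is what trains the `all connected' and `locally connected' architectures.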