Assessing the perceptual quality of pictures remains a difficult task, even for humans. This is especially true when there are many regions of interest to look at (e.g. the sea and a foreground subject) or when the differences among the pictures are subtle. Nevertheless, trends in user preference do exist, and they can be a valuable source of information for designing enhancement algorithms. A major problem, however, is how to assess these preference trends and translate them into an algorithm with a formal methodology. The approach described in this paper proposes a multi-step solution. First, we relate the space of possible enhancement sequences (intended as chains of enhancement algorithms) to the content of the image; we then reduce the number of sequences through an iterative selection that penalizes sequences producing artifacts or generating results too close to one another. Next, we present users with pairs of images enhanced with the various sequences and ask them to select the better one in each comparison. Finally, we perform a statistical analysis of the users' votes. Preliminary results show a preference for saturated and colorful sea and sky and "de-saturated"
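The abstract does not name the statistical method used to analyse the pairwise votes. One common choice for this kind of pairwise-comparison data is a Bradley-Terry model; a minimal sketch, with made-up vote counts among three hypothetical enhancement sequences, could look like this:

```python
import numpy as np

def bradley_terry(wins, iters=100):
    """Fit Bradley-Terry strengths from a pairwise win matrix.

    wins[i][j] = number of times item i was preferred over item j.
    Returns a strength vector normalised to sum to 1, fitted with
    the standard minorization-maximization update.
    """
    wins = np.asarray(wins, dtype=float)
    n = wins.shape[0]
    comparisons = wins + wins.T          # total comparisons per pair
    total_wins = wins.sum(axis=1)
    p = np.ones(n)
    for _ in range(iters):
        denom = np.zeros(n)
        for i in range(n):
            for j in range(n):
                if i != j and comparisons[i, j] > 0:
                    denom[i] += comparisons[i, j] / (p[i] + p[j])
        p = total_wins / denom
        p /= p.sum()
    return p

# illustrative vote counts: sequence 0 wins most of its comparisons
votes = [[0, 8, 7],
         [2, 0, 6],
         [3, 4, 0]]
strengths = bradley_terry(votes)
ranking = np.argsort(strengths)[::-1]    # best sequence first
```

The fitted strengths give a full ranking of the enhancement sequences rather than just per-pair win counts, which is what makes this family of models a natural fit for paired-comparison experiments.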
This work describes an innovative architecture for multi-sensor distributed video-surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy that exploits two video cameras. The system is also able to focus its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions.
The system is designed around two video cameras in a cooperative client/server structure. The first camera monitors the entire area of interest and detects moving objects using change-detection techniques; the detected objects are tracked over time and their positions are indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor, which points its zooming optics towards the moving object, and this second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of an object detected in the image-plane reference system is translated into coordinates on the same area map. In the map's common reference system, data-fusion techniques are applied to achieve a more precise and robust estimate of the objects' tracks and to perform face detection and tracking. The novelty and strength of the work reside in the cooperative multi-sensor approach, in high-resolution long-distance tracking, and in the automatic collection of biometric data, such as a person's face clip, for recognition purposes.
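The abstract does not specify how calibration and data fusion are implemented. Assuming a planar-homography calibration and inverse-covariance (information-filter) fusion — both common choices, named here as assumptions rather than the authors' method — the map-coordinate translation and fusion step could be sketched as:

```python
import numpy as np

def image_to_map(point_px, homography):
    """Project an image-plane point onto the common area map
    using a planar homography obtained from camera calibration."""
    x, y = point_px
    u, v, w = homography @ np.array([x, y, 1.0])
    return np.array([u / w, v / w])

def fuse_estimates(pos_a, cov_a, pos_b, cov_b):
    """Inverse-covariance fusion of two position estimates
    expressed in the same map reference system."""
    info_a, info_b = np.linalg.inv(cov_a), np.linalg.inv(cov_b)
    cov = np.linalg.inv(info_a + info_b)
    pos = cov @ (info_a @ pos_a + info_b @ pos_b)
    return pos, cov

# hypothetical readings of the same pedestrian from the two cameras
H = np.eye(3)                               # placeholder calibration
p_client = image_to_map((320, 240), H)      # wide-angle (client) camera
p_server = np.array([322.0, 238.0])         # zoom (server) camera
fused, fused_cov = fuse_estimates(
    p_client, np.diag([4.0, 4.0]),          # client: noisier estimate
    p_server, np.diag([1.0, 1.0]))          # server: more precise
```

With inverse-covariance weighting, the fused position is pulled towards the more precise sensor while the combined covariance shrinks below either input's, which is the "more precise and robust estimation" the cooperative setup is after.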
This paper proposes a novel technique for eliminating the illumination variations that moving objects themselves generate with respect to a fixed reference frame (the background). In particular, the proposed techniques can be used by a video-based surveillance system that automatically detects potentially dangerous or otherwise notable situations within a guarded area. The techniques are based on the chromatic properties of cast shadows and illumination, and the results achieved demonstrate that the proposed approach can greatly improve the performance of a video-surveillance system in terms of the precision of the detected objects. Such processing is necessary for a real system that must perform well twenty-four hours a day.
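The abstract does not detail the chromatic criterion it relies on. A widely used formulation of the underlying idea — a cast shadow darkens a background pixel while roughly preserving its chromaticity — works in HSV space; the thresholds below are illustrative, not tuned values from the paper:

```python
import numpy as np

def shadow_mask(frame_hsv, bg_hsv,
                alpha=0.4, beta=0.9, tau_s=0.15, tau_h=0.1):
    """Classify pixels as cast shadow when they darken the background
    while roughly preserving its chromaticity (HSV channels in [0, 1]).

    alpha < V_frame / V_bg < beta : darker, but not too dark
    |S_frame - S_bg| <= tau_s     : saturation nearly unchanged
    |H_frame - H_bg| <= tau_h     : hue nearly unchanged
    """
    h, s, v = frame_hsv[..., 0], frame_hsv[..., 1], frame_hsv[..., 2]
    hb, sb, vb = bg_hsv[..., 0], bg_hsv[..., 1], bg_hsv[..., 2]
    ratio = v / np.maximum(vb, 1e-6)
    return ((ratio > alpha) & (ratio < beta)
            & (np.abs(s - sb) <= tau_s)
            & (np.abs(h - hb) <= tau_h))

# a background pixel, the same pixel under a cast shadow,
# and a pixel belonging to an actual object (different hue)
bg = np.array([[[0.30, 0.40, 0.80]]])
mask_shadow = shadow_mask(np.array([[[0.31, 0.38, 0.50]]]), bg)
mask_object = shadow_mask(np.array([[[0.60, 0.70, 0.50]]]), bg)
```

Pixels flagged by such a mask can be removed from the change-detection foreground before object segmentation, which is how shadow suppression improves the precision of the detected object shapes.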