The wavelet transform is a powerful tool for image and video processing, useful in a wide range of applications. This paper is concerned with the efficiency of a fast wavelet transform (FWT) implementation and of several wavelet filters suited to constrained devices. Such constraints are typical of mobile (cell) phones and personal digital assistants (PDAs), and may include a combination of limited memory, slow floating-point operations (compared to integer operations, most often the result of missing hardware support) and limited local storage. Yet these devices are burdened with demanding tasks, such as processing live video or audio signals captured by on-board sensors.
In this paper we present HeatWave, a new wavelet software library that can be used efficiently for image/video processing and analysis tasks on mobile phones and PDAs. We demonstrate that HeatWave is suitable for real-time applications, offering fine control over transform parameters, and we present experimental results to substantiate these claims. Because the library is intended for practical use, we also account for differences among common embedded operating system platforms, such as missing standard routines or functions, stack limitations, etc. This makes HeatWave suitable for a wide range of applications and platforms.
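The abstract's emphasis on integer over floating-point arithmetic can be illustrated with one standard FWT building block: a single level of the integer-to-integer Haar transform computed with the lifting scheme, using only adds, subtracts and shifts. This is a minimal sketch of the general technique, not HeatWave's actual API; the function names are illustrative.

```python
def haar_lifting_forward(x):
    """One level of the integer Haar (S) transform via lifting.

    Uses only integer additions, subtractions and shifts, so it runs
    efficiently on devices without hardware floating-point support.
    Input length must be even; returns (approximation, detail) lists.
    """
    assert len(x) % 2 == 0
    s = []  # approximation (low-pass) coefficients
    d = []  # detail (high-pass) coefficients
    for i in range(0, len(x), 2):
        detail = x[i + 1] - x[i]       # predict step: difference
        approx = x[i] + (detail >> 1)  # update step: integer average
        d.append(detail)
        s.append(approx)
    return s, d


def haar_lifting_inverse(s, d):
    """Exact integer inverse of haar_lifting_forward."""
    x = []
    for approx, detail in zip(s, d):
        even = approx - (detail >> 1)  # undo update step
        odd = even + detail            # undo predict step
        x.extend([even, odd])
    return x
```

Because each lifting step is individually invertible in integer arithmetic, reconstruction is exact: `haar_lifting_inverse(*haar_lifting_forward(x))` returns `x` unchanged, with no rounding error to accumulate across transform levels.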
Mobile phones and other handheld devices are constrained in their memory and computational power, yet new generations of these devices provide access to web-based services and are equipped with digital cameras that make them more attractive to users. These added capabilities are expected to help incorporate such devices into the global communication system. Taking advantage of them requires highly efficient algorithms, including real-time image and video processing and transmission. This paper is concerned with high-quality video compression for constrained mobile devices. We adapt a wavelet-based, feature-preserving image compression technique that we developed recently, so as to make it suitable for implementation on mobile phones and PDAs. The earlier version of the compression algorithm exploits the statistical properties of the multi-resolution wavelet-transformed images. The main modification is based on the observation that in many cases the statistical parameters of the wavelet subbands of adjacent video frames do not differ significantly. We investigate the possibility of re-using codebooks for a sequence of adjacent frames without adversely affecting image quality. Such an approach results in significant bandwidth and processing-time savings. The performance of this scheme is compared to other video compression methods. Such a scheme is expected to be useful in security applications, such as the transmission of biometric data for server-based verification.
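The codebook-reuse idea above could be realised with a simple decision rule: reuse the previous frame's codebook when the statistics of the corresponding wavelet subband have not changed much. The sketch below assumes mean and standard deviation as the statistical parameters and a relative tolerance of 5%; both are illustrative assumptions, not the paper's actual criterion.

```python
import math


def subband_stats(subband):
    """Mean and standard deviation of one wavelet subband (flat list)."""
    n = len(subband)
    mean = sum(subband) / n
    var = sum((c - mean) ** 2 for c in subband) / n
    return mean, math.sqrt(var)


def can_reuse_codebook(prev_subband, curr_subband, tol=0.05):
    """Decide whether the previous frame's codebook is still usable.

    Returns True when the mean and standard deviation of the current
    frame's subband differ from the previous frame's by less than a
    relative tolerance (illustrative threshold, not from the paper).
    """
    pm, ps = subband_stats(prev_subband)
    cm, cs = subband_stats(curr_subband)
    dm = abs(cm - pm) / (abs(pm) + 1e-9)  # relative change in mean
    ds = abs(cs - ps) / (ps + 1e-9)       # relative change in std dev
    return dm <= tol and ds <= tol
```

When the rule fires, the encoder can skip codebook retraining and transmission for that subband, which is where the bandwidth and processing-time savings mentioned above would come from.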
Verifying a person's identity through a combination of more than one biometric trait strongly increases the robustness of person authentication in real applications. This is particularly the case in applications involving signals of degraded quality, such as person authentication on mobile platforms. The mobile context degrades input signals through the variety of environments encountered (ambient noise, lighting variations, etc.), while the sensors' lower quality further contributes to a decrease in system performance. Our aim in this work is to combine traits from the three biometric modalities of speech, face and handwritten signature in a concrete application, performing non-intrusive biometric verification on a personal mobile device (smartphone/PDA).
Most available biometric databases have been acquired in more or less controlled environments, which makes it difficult to predict performance in a real application. Our experiments are performed on a database acquired on a PDA as part of the SecurePhone project (IST-2002-506883, "Secure Contracts Signed by Mobile Phone"). This database contains 60 virtual subjects, balanced in gender and age. Virtual subjects are obtained by coupling audio-visual signals from real English-speaking subjects with signatures from other subjects captured on the touch screen of the PDA. Video data for the PDA database was recorded in 2 sessions separated by at least one week. Each session comprises 4 acquisition conditions: 2 indoor and 2 outdoor recordings (in each case, one of good and one of degraded quality). Handwritten signatures were captured in one session under realistic conditions. Different scenarios of matching between training and test conditions are tested to measure the resistance of various fusion systems to different types of variability and different amounts of enrolment data.
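The abstract does not specify which fusion rule the systems use; a common baseline for combining match scores from several modalities is a weighted sum after min-max normalization. The sketch below illustrates that baseline for the three modalities named above; the score ranges, weights and acceptance threshold are illustrative assumptions, not values from the paper.

```python
def min_max_normalize(score, lo, hi):
    """Map a raw matcher score into [0, 1] given that matcher's range."""
    return (score - lo) / (hi - lo)


def fuse_scores(scores, ranges, weights):
    """Weighted-sum score-level fusion across biometric modalities.

    scores, ranges and weights are dicts keyed by modality name,
    e.g. 'speech', 'face', 'signature'. Returns a fused score in [0, 1].
    """
    total = sum(weights.values())
    return sum(
        weights[m] * min_max_normalize(scores[m], *ranges[m])
        for m in scores
    ) / total


def accept(fused_score, threshold=0.5):
    """Accept the identity claim when the fused score clears the threshold."""
    return fused_score > threshold
```

Normalizing each matcher's output before fusion matters because the modalities produce scores on incompatible scales (e.g. a face matcher scoring 0-100 against signature and speech matchers scoring 0-1); without it, one modality would dominate the sum.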