Although Dynamic Adaptive Streaming over HTTP (DASH) has emerged as a leading technology for the transmission of live and on-demand audio and video content over any IP network, the choice of video segment size remains an important design aspect, as it varies from one implementation to another. We propose a method to investigate the effect of varying the buffer size, which is configured to adapt dynamically to the segment size. Our proposed method also retrieves the most appropriate video representation by comparing the available bandwidth with the sizes of the video representations. We present an empirical study of different segment sizes (i.e. 1, 2, 5, 10, 15 and 20 seconds), striving for the best available quality. An objective evaluation was carried out to study the impact of segment size while streaming video. The tests show that the larger the segment size, the better the PSNR value; however, larger segments also produce a higher initial delay. In our results, a segment size of 20 seconds achieved the highest PSNR value at 45.7 dB, whereas a segment size of 1 second had the lowest initial delay at 1.2 seconds.
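The representation-selection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the representation list, field names and bitrate values are all hypothetical, and the rule shown is simply "pick the highest-bitrate representation that fits within the measured bandwidth".

```python
# Hypothetical sketch of bandwidth-based representation selection.
# All names and bitrate values below are illustrative, not from the paper.

def select_representation(representations, available_bandwidth_bps):
    """Return the highest-bitrate representation whose bitrate does not
    exceed the available bandwidth; fall back to the lowest-bitrate one."""
    feasible = [r for r in representations
                if r["bitrate_bps"] <= available_bandwidth_bps]
    if feasible:
        return max(feasible, key=lambda r: r["bitrate_bps"])
    return min(representations, key=lambda r: r["bitrate_bps"])

reps = [
    {"id": "240p",  "bitrate_bps": 400_000},
    {"id": "480p",  "bitrate_bps": 1_000_000},
    {"id": "1080p", "bitrate_bps": 4_500_000},
]
print(select_representation(reps, 1_200_000)["id"])  # picks "480p"
```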
Falls are among the most critical health problems for elderly people and often cause significant injuries. To tackle the serious risk posed by falls, we develop an automatic wearable fall detection system utilizing two devices (a mobile phone and a wireless sensor) based on three-axis accelerometer signals. The goal of this study is to find an effective machine learning method that distinguishes falls from activities of daily living (ADL) using only a single triaxial accelerometer. In addition, we compare the performance results for wearable-sensor and mobile-phone data. The proposed model detects falls using seven different classifiers, and the performance is demonstrated using accuracy, recall, precision and F-measure. Our model obtained accuracy over 99% on wearable device data and over 97% on mobile phone data.
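A common first step in triaxial-accelerometer fall detection is to compute the signal magnitude vector (SMV) per sample and flag windows whose peak SMV crosses a threshold. The sketch below illustrates that idea only; it is not the paper's seven-classifier pipeline, and the threshold value is an assumption.

```python
import math

# Illustrative sketch, not the paper's method: threshold-based screening
# on the signal magnitude vector (SMV) of triaxial accelerometer data.
FALL_THRESHOLD_G = 2.5  # assumed peak-acceleration threshold, in g

def signal_magnitude(ax, ay, az):
    """Magnitude of one triaxial acceleration sample, in g."""
    return math.sqrt(ax**2 + ay**2 + az**2)

def is_fall_candidate(window):
    """window: list of (ax, ay, az) samples; True if peak SMV crosses threshold."""
    return max(signal_magnitude(*s) for s in window) > FALL_THRESHOLD_G

adl = [(0.0, 0.0, 1.0)] * 50              # quiet standing, ~1 g throughout
fall = adl + [(2.1, 1.4, 2.0)] + adl      # brief high-impact spike (~3.2 g)
print(is_fall_candidate(adl), is_fall_candidate(fall))  # False True
```

In practice such a threshold test is only a candidate filter; the classifiers compared in the study would operate on richer features extracted from each window.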
A fully automated volumetric image segmentation algorithm is proposed which uses Bayesian inference to assess the appropriate number of image segments. The segmentation is performed exclusively within the wavelet domain, after the application of the redundant <i>à trous</i> wavelet transform employing four decomposition levels. This type of analysis allows the spatial relationships between objects in an image to be evaluated at multiple scales, exploiting image characteristics matched to a particular scale that could go undetected by other analysis techniques. The Bayes Information Criterion (BIC) is calculated for a range of segment numbers, with a relative maximum determining the optimal number of segments. The fundamental idea of the BIC is to approximate the integrated likelihood in the Bayes factor and then ignore terms which do not increase quickly with N, where N is the cardinality of the data. Gaussian Mixture Modelling (GMM) is then applied to an individual mid-level wavelet scale to achieve a baseline scene estimate considering only voxel intensities. This estimate is then refined across a series of wavelet scales in a multiband manner, by means of a Markov Random Field Model (MRFM), to reflect spatial and multiresolution correlations within the image. This approach delivers promising results for a number of volumetric brain MR and PET images, with inherent image features being identified. The results largely correspond with those obtained by researchers in biomedical imaging utilising manually defined parameters for image modelling.
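The BIC-based selection of the number of mixture components can be sketched with scikit-learn on synthetic one-dimensional "voxel intensity" data. This is only an illustration of the model-selection idea, not the paper's wavelet-domain pipeline; note that scikit-learn's <i>bic()</i> is defined so that lower is better, so the optimum here is a minimum rather than the relative maximum of the paper's formulation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic 1-D intensity data drawn from three well-separated Gaussians
# (an illustration only; the paper works on wavelet-domain voxel data).
rng = np.random.default_rng(0)
X = np.concatenate([
    rng.normal(0.0, 1.0, 300),
    rng.normal(6.0, 1.0, 300),
    rng.normal(12.0, 1.0, 300),
]).reshape(-1, 1)

# Fit a GMM for each candidate segment count and score it with the BIC.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 6)}
best_k = min(bics, key=bics.get)  # sklearn's BIC: lower is better
print(best_k)
```

For three clearly separated clusters the criterion should select three components, mirroring how the optimal segment number is chosen before the MRFM refinement stage.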
Matrix algorithms are important in many types of applications, including image and signal processing, areas that require enormous computing power. A close examination of the algorithms used in these and related applications reveals that many of the fundamental operations are matrix operations, such as matrix multiplication, which has complexity <i>O(N<sup>3</sup>)</i> on a sequential computer and <i>O(N<sup>3</sup>/p)</i> on a parallel system with <i>p</i> processors. This paper presents an investigation into the design and implementation of different matrix algorithms, such as matrix operations, matrix transforms and matrix decompositions, using an FPGA-based environment. Solutions for the problem of processing large matrices are proposed. The proposed system architectures are scalable and modular, and require less area and lower time complexity, with reduced latency, when compared with existing structures.
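The complexity figures above refer to the classical triple-loop multiplication kernel, sketched below for reference. Each of the <i>N</i>² output entries costs <i>N</i> multiply-adds, giving <i>O(N</i>³<i>)</i> total work; splitting the outer loop across <i>p</i> processors yields the <i>O(N</i>³<i>/p)</i> parallel figure.

```python
# Classical O(N^3) matrix multiplication: three nested loops over N.
def matmul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):          # each (i, j) entry costs N multiply-adds,
        for j in range(n):      # so total work is N * N * N = O(N^3)
            s = 0
            for k in range(n):
                s += A[i][k] * B[k][j]
            C[i][j] = s
    return C

I = [[1, 0], [0, 1]]
M = [[2, 3], [4, 5]]
print(matmul(I, M))  # [[2, 3], [4, 5]]
```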
Intelligent systems using methods such as local search meta-heuristic techniques, neural networks, genetic algorithms and genetic programming are applied to many diverse application areas, such as image processing and reconfigurable computing. Although great success has been achieved by their application in such areas, only recently has work been undertaken on their application to reconfigurable hardware. The research presented here uses the strengths of these systems both to schedule work on an architecture and to automatically design architectures for optimum processing capability. An Intelligent Technique (IT) is used to automatically reconfigure the proposed Systolic Architectures (SA) for the implementation of matrix-based algorithms, while a Heuristic Approach (HA) is used to optimize the implementation of the proposed designs on Field Programmable Gate Arrays (FPGAs).