Hyperspectral Imaging (HI) collects high-resolution spectral information from hundreds of narrow bands across the electromagnetic spectrum, generating 3D data cubes in which each pixel stores the reflectance spectrum of its spatial location. As a result, each image comprises a large volume of data, which makes its processing challenging as performance requirements are continuously tightened; for instance, new HI applications demand real-time responses. Parallel processing therefore becomes a necessity, and the intrinsic parallelism of the algorithms must be exploited. In this paper, a spatial-spectral classification approach has been implemented using a dataflow language known as RVC-CAL. This language represents a system as a set of functional units, and its main advantage is that it simplifies parallelization by mapping the different blocks onto different processing units. The spatial-spectral classification approach refines previously obtained classification results through a K-Nearest Neighbors (KNN) filtering process in which both the spectral value of each pixel and its spatial coordinates are considered. To do so, KNN needs two inputs: a one-band representation of the hyperspectral image and the classification results provided by a pixel-wise classifier. Thus, the spatial-spectral classification algorithm is divided into three stages: a Principal Component Analysis (PCA) algorithm that computes the one-band representation of the image, a Support Vector Machine (SVM) classifier, and the KNN-based filtering algorithm. The parallelization of these algorithms shows promising results in terms of computation time, as mapping them onto different cores yields a speedup of 2.69x when using 3 cores. Consequently, the experimental results demonstrate that real-time processing of hyperspectral images is achievable.
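The KNN filtering stage described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `knn_filter` and the `spectral_weight` parameter are our own, and the feature space simply concatenates the spatial coordinates with the one-band spectral value before a majority vote over the k nearest neighbors.

```python
import numpy as np

def knn_filter(band, labels, k=4, spectral_weight=1.0):
    """Refine a pixel-wise classification map with a KNN filter that
    mixes the one-band spectral value and the spatial coordinates.
    (Illustrative sketch; weighting and tie-breaking are assumptions.)"""
    h, w = band.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Feature vector per pixel: (row, col, weighted spectral value).
    feats = np.stack([ys.ravel(), xs.ravel(),
                      spectral_weight * band.ravel()], axis=1).astype(float)
    flat_labels = labels.ravel()
    refined = np.empty_like(flat_labels)
    for i, f in enumerate(feats):
        d = np.linalg.norm(feats - f, axis=1)
        # k nearest neighbours, excluding the pixel itself (distance 0).
        nn = np.argsort(d)[1:k + 1]
        votes = np.bincount(flat_labels[nn])
        refined[i] = votes.argmax()
    return refined.reshape(h, w)
```

On a uniform band, an isolated mislabeled pixel is outvoted by its spatial neighbors, which is the smoothing effect the spatial-spectral stage aims for.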
Hyperspectral imaging (HI) collects information from across the electromagnetic spectrum, covering a wide range of wavelengths. The rapid development of this technology within the field of remote sensing has opened new research fields, such as automatic cancer detection or precision agriculture, but has also increased the performance requirements of the applications. For instance, strong time constraints must be respected, since many applications imply real-time responses. Achieving real-time performance is a challenge, as hyperspectral sensors generate high volumes of data to process. Thus, to meet this requirement, the initial image data first needs to be reduced by discarding redundancies and keeping only useful information; then, the intrinsic parallelism in the system specification must be explicitly highlighted.
In this paper, the PCA (Principal Component Analysis) algorithm is implemented using the RVC-CAL dataflow language, which specifies a system as a set of blocks, or actors, and enables its parallelization by scheduling the blocks onto different processing units. Two implementations of PCA for hyperspectral images have been compared when obtaining the first few principal components: first, the algorithm has been implemented using the Jacobi approach for obtaining the eigenvectors; thereafter, the NIPALS-PCA algorithm, which approximates the principal components iteratively, has also been studied. Both implementations have been compared in terms of accuracy and computation time, and the parallelization of both models has also been analyzed.
These comparisons show promising results in terms of computation time and parallelization: the performance of NIPALS-PCA is clearly better when only the first principal component is required, while partitioning the algorithm execution over several cores shows an important speedup for PCA-Jacobi.
Thus, the experimental results show the potential of RVC-CAL to automatically generate implementations that process the large volumes of information of hyperspectral sensors in real time, as it provides advanced semantics for exploiting system parallelization.
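The reason NIPALS-PCA wins when only the first principal component is needed can be seen from the shape of the iteration: it alternates score and loading updates for one component at a time, never diagonalizing the full covariance matrix as the Jacobi approach does. A minimal sketch of that first-component iteration, under our own naming (`nipals_first_pc`) and stopping criterion:

```python
import numpy as np

def nipals_first_pc(X, tol=1e-10, max_iter=500):
    """Approximate the first principal component of X (samples x bands)
    with the iterative NIPALS scheme: alternate score/loading updates
    until the score vector converges. Sketch, not the paper's code."""
    Xc = X - X.mean(axis=0)          # centre each band
    t = Xc[:, 0].copy()              # initial score vector
    for _ in range(max_iter):
        p = Xc.T @ t / (t @ t)       # loading estimate
        p /= np.linalg.norm(p)       # normalise the loading
        t_new = Xc @ p               # updated scores
        if np.linalg.norm(t_new - t) < tol:
            t = t_new
            break
        t = t_new
    return t, p                      # scores and first loading vector
```

For a single component this is equivalent to a power iteration on the covariance matrix, so its cost per iteration is one pass over the data rather than a full eigendecomposition.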
Hyperspectral Imaging (HI) collects high-resolution spectral information consisting of hundreds of bands across the electromagnetic spectrum, from the ultraviolet to the infrared range. Thanks to this huge amount of information, the different elements that compose a hyperspectral image can be identified. HI was initially developed for remote sensing applications and its use has since spread to research fields such as security and medicine. In all of them, new applications that demand real-time processing have appeared. In order to fulfill this requirement, the intrinsic parallelism of the algorithms needs to be explicitly exploited.
In this paper, a Support Vector Machine (SVM) classifier with a linear kernel has been implemented using a dataflow language called RVC-CAL. Specifically, RVC-CAL allows the scheduling of functional actors onto the cores of the target platform. Once the parallelism of the classifier has been extracted, the RVC-CAL implementation of the SVM classifier has been compared with one using LibSVM, a specific library for SVM applications.
The speedup obtained for the image classifier depends on the number of blocks into which the image is divided; concretely, when 3 image blocks are processed in parallel, an average speedup above 2.50x with respect to the sequential RVC-CAL version is achieved.
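The block-level parallelism described above exploits the fact that a linear-kernel SVM classifies each pixel independently, as a dot product against per-class hyperplanes, so the pixel matrix can be split into blocks and scored concurrently. A hedged sketch under assumed names (`svm_decision`, `classify_in_blocks`; the one-vs-rest formulation and thread-based split are our illustration, not the paper's RVC-CAL mapping):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def svm_decision(W, b, pixels):
    """One-vs-rest linear SVM: pick, for each pixel (row of `pixels`),
    the class whose hyperplane score w_c . x + b_c is largest."""
    return np.argmax(pixels @ W.T + b, axis=1)

def classify_in_blocks(W, b, image, n_blocks=3):
    """Split the pixel matrix into row blocks and classify them in
    parallel, mimicking the block-level parallelism described above."""
    blocks = np.array_split(image, n_blocks, axis=0)
    with ThreadPoolExecutor(max_workers=n_blocks) as pool:
        parts = pool.map(lambda blk: svm_decision(W, b, blk), blocks)
    return np.concatenate(list(parts))
```

Because the blocks share no state, the split introduces no synchronization beyond the final concatenation, which is why speedup scales with the number of blocks until the cores are saturated.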
Hyperspectral imaging (HI) collects information from across the electromagnetic spectrum, covering a wide range of wavelengths. Although this technology was initially developed for remote sensing and earth observation, its multiple advantages, such as high spectral resolution, have led to its application in other fields, such as cancer detection. However, this new field imposes specific requirements; for instance, strong time specifications must be met, since all the potential applications, like surgical guidance or in vivo tumor detection, imply real-time requisites. Achieving these time requirements is a great challenge, as hyperspectral images involve extremely high volumes of data to process. Thus, new research lines are studying new processing techniques, and the most relevant ones are related to system parallelization.
In that line, this paper describes the construction of a new hyperspectral processing library for the RVC-CAL language, which is specifically designed for multimedia applications and allows multithreaded compilation and system parallelization. This paper presents the development of the library functions required to implement two of the four stages of the hyperspectral imaging processing chain: endmember and abundance estimation. The results obtained show that the library achieves speedups of approximately 30% compared to existing software for hyperspectral image analysis; concretely, the endmember estimation step reaches an average speedup of 27.6%, which saves almost 8 seconds of execution time. The results also reveal some bottlenecks, such as the communication interfaces among the different actors due to the volume of data to transfer. Finally, it is shown that the library considerably simplifies the implementation process. Thus, the experimental results show the potential of an RVC-CAL library for analyzing hyperspectral images in real time, as it provides enough resources to study the system performance.
Hyperspectral Imaging (HI) collects high-resolution spectral information consisting of hundreds of bands ranging from the infrared to the ultraviolet wavelengths. In the medical field, and specifically in cancer tissue identification in the operating room, the potential of HI is huge. However, given the data volume of HI and the computational complexity and cost of the identification algorithms, real-time processing is the key, differential feature that brings value to surgeons. In order to achieve real-time implementations, the parallelism available in a specification needs to be explicitly highlighted. Dataflow programming languages, like RVC-CAL, are able to accomplish this goal.
In this paper, an RVC-CAL library to implement dimensionality reduction and endmember extraction is presented. The results obtained show significant improvements with regard to a state-of-the-art analysis tool. A speedup of 30% is achieved for the complete processing chain and, in particular, a speedup of 5% has been achieved in the dimensionality reduction step. This dimensionality reduction takes ten of the thirteen seconds that the whole system needs to analyze one of the images. In addition, the RVC-CAL library is an excellent tool to simplify the implementation process of HI algorithms. Indeed, during the experimental tests, the RVC-CAL library has shown its potential to reveal bottlenecks in the HI processing chain and, therefore, to improve the system performance towards real-time constraints. Furthermore, the RVC-CAL library provides the possibility of testing system performance.
System-level energy optimization of battery-powered multimedia embedded systems has recently become a design goal.
The short operational time of multimedia terminals makes computationally demanding applications impractical in real
scenarios; for instance, the so-called smartphones are currently unable to remain in operation for longer than several hours.
The OMAP3530 processor basically consists of two processing cores, a General Purpose Processor (GPP) and a Digital
Signal Processor (DSP). The former, an ARM Cortex-A8 processor, is intended to run a generic Operating System (OS),
while the latter, a DSP core based on the C64x+, has an architecture optimized for video processing.
The BeagleBoard, a commercial prototyping board based on the OMAP processor, has been used to test the Android
Operating System and measure its performance. The board has 128 MB of SDRAM external memory, 256 MB of Flash
external memory and several interfaces. Note that the clock frequencies of the ARM and DSP OMAP cores are 600 MHz
and 430 MHz, respectively.
This paper describes the energy consumption estimation of the processes and multimedia applications of an Android v1.6
(Donut) OS on the OMAP3530-based BeagleBoard. In addition, tools that enable communication between the two
processing cores have been employed. A test-bench to profile the OS resource usage has been developed.
As far as the energy estimates are concerned, the OMAP processor energy consumption model provided by the
manufacturer has been used. The model is basically divided into two energy components. The former, the baseline core
energy, describes the energy consumption that is independent of any chip activity. The latter, the module active energy,
describes the energy consumed by the active modules depending on resource usage.
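The two-component model above (an activity-independent baseline term plus per-module active terms scaled by resource usage) can be sketched as a simple sum. This is our illustrative formulation only; the function name `estimate_energy` and all power figures in the example are placeholders, not the manufacturer's model or values.

```python
def estimate_energy(t_s, p_baseline_w, active_modules):
    """Two-component energy estimate over an interval of t_s seconds:
      E = P_baseline * t  +  sum_m (P_active_m * usage_m * t)
    active_modules: list of (active_power_w, usage_fraction) tuples.
    Illustrative sketch; power figures are placeholders."""
    baseline = p_baseline_w * t_s                       # activity-independent
    active = sum(p * u * t_s for p, u in active_modules)  # usage-dependent
    return baseline + active
```

Separating the two terms lets a profiler attribute measured consumption to specific modules: the baseline is fixed by the operating point, while only the active terms respond to the resource usage that the test-bench records.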
Media synchronization in a network context minimizes the effects of network jitter and of the skew between the emitter
and receiver clocks. Theoretical algorithms cannot always be implemented on real systems because of the architectural
differences between a real and a theoretical system. In this paper, an implementation of an intra-medium and an
inter-media synchronization algorithm for a real multi-standard IP set-top box is presented. For intra-medium
synchronization, the proposed technique is based on controlling the receiver buffer, whereas for inter-media
synchronization the proposed technique is based on controlling the video playback according to the Presentation Time
Stamps (PTS) of the media units (audio and video). The proposed synchronization algorithms have been integrated in an
IP-STB and tested in a real environment using DVD movies and TV channels with excellent results. These results show
that the proposed algorithms can achieve media synchronization and meet the requirements of perceived quality of
service (P-QoS).
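The inter-media technique, controlling video playback from the PTS of the media units, can be sketched as a skew check between the current video and audio timestamps. The threshold and the repeat/skip actions below are illustrative assumptions, not the paper's exact policy; only the 90 kHz PTS clock rate is standard MPEG timing.

```python
def sync_action(video_pts, audio_pts, clock_hz=90000, threshold_ms=40):
    """Decide how to adjust video playback from the PTS skew between the
    current video and audio media units (MPEG PTS runs at 90 kHz).
    Illustrative policy; thresholds and actions are assumptions."""
    skew_ms = (video_pts - audio_pts) * 1000.0 / clock_hz
    if skew_ms > threshold_ms:
        return "repeat"   # video ahead of audio: hold the current frame
    if skew_ms < -threshold_ms:
        return "skip"     # video behind audio: drop a frame to catch up
    return "play"         # within tolerance: normal playback
```

Keeping the skew inside a small tolerance window rather than forcing exact equality avoids oscillating between repeats and skips, which is what preserves the perceived quality of service.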
In this paper, a system that emulates the whole DAB transmission chain, allowing the development and test of external
decoders for DAB data services, is described. DAB receivers offer the possibility of connecting an external data decoder
that handles additional data services, using a data interface called RDI. The system described in this paper replaces the
complete DAB transmission chain from the transmitter to the RDI interface of the receiver. The system generates a DAB
ensemble that can carry several data services and transmits the RDI frames corresponding to this ensemble through an
RDI output. Any type of data service can be carried by the ensemble. The purpose of the system is to be used as a debug
and verification tool for external decoder equipment that can be connected to a DAB receiver via an RDI interface. The
system has been tested with two kinds of data services -data carousels and video streaming- with very satisfactory
results in both cases. We are currently working on adding DMB support to our system.
Internet Protocol Set-Top Boxes (IP STBs) based on single-processor architectures have been recently introduced in the
market. In this paper, the implementation of an MPEG-4 SP/ASP video decoder for a multi-format IP STB based on a
TMS320DM641 DSP is presented. An initial decoder for the PC platform was fully tested and ported to the DSP. Starting
from this code, an optimization process achieved a 90% speedup, enabling real-time MPEG-4 SP/ASP
decoding. The MPEG-4 decoder has been integrated in an IP STB and tested in a real environment using DVD movies
and TV channels with excellent results.