This PDF file contains the front matter associated with SPIE Proceedings Volume 9497, including the Title Page, Copyright information, Table of Contents, Introduction (if any), Authors, and Conference Committee listing.
Image color is an important property that carries essential information for accurate image analysis as well as for searching and retrieving images from databases. The colors of an image may be distorted during acquisition, transmission, and display by a variety of factors, including environmental conditions. Developing an effective, quantitative metric for evaluating the color quality of an image that agrees with human observers is challenging, yet essential for computer vision and autonomous imaging systems. Traditional colorfulness measures are not robust to noise and fail to distinguish different color tones. In this paper, a new no-reference color quality measure, CQE, is presented that combines a colorfulness measure with a Uni-Color Differentiation term. The CQE is shown to satisfy the established properties of a good measure: it correlates well with human perception, so its evaluations agree with those of human observers; it is robust to noise and distortions, providing consistent and reliable values for a wide range of images; and it is computationally efficient, so it can be used in real-time applications. Experimental results demonstrate the effectiveness of the CQE measure in evaluating the color quality of a variety of test images subjected to different environmental conditions, as well as its applicability to fast image retrieval for synthetic patches and natural images. Retrieving images by simply searching on the value of the CQE measure is fast, easy to implement, and invariant to image orientation.
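As an illustration of the kind of colorfulness statistic the abstract contrasts CQE against, the following is a minimal sketch of the classic Hasler-Süsstrunk colorfulness measure; the CQE formula itself is not reproduced here.

```python
# Hedged sketch: the Hasler-Suesstrunk colorfulness statistic, one example of the
# "traditional colorfulness measures" referenced above (not the paper's CQE measure).
import numpy as np

def colorfulness(rgb: np.ndarray) -> float:
    """rgb: H x W x 3 array in [0, 255]."""
    r, g, b = rgb[..., 0].astype(float), rgb[..., 1].astype(float), rgb[..., 2].astype(float)
    rg = r - g                      # red-green opponent channel
    yb = 0.5 * (r + g) - b          # yellow-blue opponent channel
    std_rgyb = np.sqrt(np.std(rg) ** 2 + np.std(yb) ** 2)
    mean_rgyb = np.sqrt(np.mean(rg) ** 2 + np.mean(yb) ** 2)
    return std_rgyb + 0.3 * mean_rgyb

# Example: a pure gray image scores 0, a saturated random-color image scores high.
print(colorfulness(np.random.randint(0, 256, (64, 64, 3))))
```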
Data fusion can be used to generate high-quality data from multiple degraded data sets by appropriately extracting and combining the "good" information from each degraded set. For image fusion in particular, it may be used for denoising, deblurring, or pixel-dropout compensation. Image fusion is often performed in an image transform domain: transform coefficients from multiple images are combined in various ways to produce an improved coefficient set, and the fused transform data is then inverted to produce the fused image. In this paper we formulate a general approach to image fusion in the wavelet domain. The proposed approach exploits context information through nonparametric statistical hypothesis testing, which places the fusion on a theoretically sound and principled basis and leads to improved fusion performance. Furthermore, using statistics of the wavelet coefficients in a neighborhood of the test coefficient more fully exploits the available context information. We first formulate the fusion approach and then present numerical image fusion results using a sampling of imagery from a public-domain image database. We compare the fusion performance of the proposed approach with that of other standard wavelet-domain fusion approaches and show a performance improvement when using the proposed approach.
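For context, the following is a minimal sketch of one of the standard wavelet-domain fusion baselines the paper compares against (average the approximation band, take the max-absolute detail coefficient), assuming the PyWavelets package; the neighborhood hypothesis-testing rule itself is not reproduced.

```python
# Hedged sketch of a simple wavelet-domain fusion rule (a baseline, not the paper's method).
import numpy as np
import pywt

def fuse_wavelet(img_a: np.ndarray, img_b: np.ndarray, wavelet: str = "db2") -> np.ndarray:
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a, wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b, wavelet)
    cA = 0.5 * (cA_a + cA_b)                                     # average the approximation band
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)   # max-abs rule for detail bands
    fused = (cA, (pick(cH_a, cH_b), pick(cV_a, cV_b), pick(cD_a, cD_b)))
    return pywt.idwt2(fused, wavelet)

a = np.random.rand(128, 128); b = np.random.rand(128, 128)
print(fuse_wavelet(a, b).shape)
```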
Most recent image deblurring methods use only the valid information found in the input image as the clue for restoring the blurred region. These methods usually suffer from insufficient prior information and relatively poor adaptiveness. A patch-based method uses not only the valid information of the input image itself but also the prior information of sample images to improve adaptiveness. However, the cost function of this method is quite time-consuming to optimize, and the method may also produce ringing artifacts. In this paper, we propose an improved non-blind deblurring algorithm based on learning patch likelihoods. On one hand, we consider the effect of the Gaussian mixture model components with different weights and normalize the weight values, which optimizes the cost function and reduces running time. On the other hand, a post-processing method is proposed to remove the ringing artifacts produced by the traditional patch-based method. Extensive experiments verify that our method effectively reduces execution time, suppresses ringing artifacts, and preserves the quality of the deblurred image.
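The following is a minimal sketch of the patch-likelihood idea such methods build on: a Gaussian mixture model learned on patches scores candidate patches, so a restoration cost can favor patches the prior finds likely. The training data here is a synthetic stand-in, and this shows only the prior term, not the full deblurring solver.

```python
# Hedged sketch of a GMM patch prior (EPLL-style patch likelihood), not the paper's algorithm.
import numpy as np
from sklearn.mixture import GaussianMixture

patch = 8
# Stand-in training data; in practice patches come from a corpus of sharp natural images.
train = np.random.rand(5000, patch * patch)
gmm = GaussianMixture(n_components=10, covariance_type="full", random_state=0).fit(train)

def patch_log_likelihood(patches: np.ndarray) -> np.ndarray:
    """patches: (N, patch*patch) array of patch vectors."""
    return gmm.score_samples(patches)            # log p(patch) under the learned prior

print(patch_log_likelihood(np.random.rand(3, patch * patch)))
```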
Object tracking in wide area motion imagery is a complex problem that consists of object detection and target tracking over time. This challenge can be solved by human analysts, who naturally have the ability to keep track of an object in a scene. A computer vision solution for object tracking has the potential to be a much faster and more efficient solution. However, a computer vision solution faces certain challenges that do not affect a human analyst. To overcome these challenges, a tracking process is proposed that is inspired by the known advantages of a human analyst. First, the focus of a human analyst is emulated by processing only the local object search area. Second, it is proposed that an intensity enhancement process be applied to the local area to allow features to be detected in poor lighting conditions; this simulates the ability of the human eye to discern objects in complex lighting conditions. Third, it is proposed that the spatial resolution of the local search area be increased to extract better features and provide more accurate feature matching. A quantitative evaluation is performed to show the tracking improvement obtained with the proposed method. The three databases used for these evaluations, each consisting of grayscale sequences obtained from aircraft, are the Columbus Large Image Format database, the Large Area Image Recorder database, and the Sussex database.
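A minimal sketch of the three local-area steps described above (crop to a search window, enhance contrast, upsample) is shown below, using scikit-image routines as stand-ins for whatever enhancement and interpolation the paper actually employs.

```python
# Hedged sketch: crop a local search area, apply local contrast enhancement, then upsample.
import numpy as np
from skimage import exposure, transform

def prepare_search_area(frame: np.ndarray, center: tuple, half: int = 32, scale: int = 2):
    r, c = center
    crop = frame[max(r - half, 0):r + half, max(c - half, 0):c + half]    # local search area
    crop = exposure.equalize_adapthist(crop)        # local contrast enhancement (CLAHE)
    return transform.rescale(crop, scale, order=3, anti_aliasing=False)   # spatial upsampling

frame = np.random.rand(512, 512)
print(prepare_search_area(frame, (200, 300)).shape)   # (128, 128)
```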
In this paper an image inpainting approach is presented that restores the edges of objects in an image by constructing a composite curve, using the concepts of parametric and geometric continuity. It is shown that this approach can restore curved edges and provides more flexibility for curve design in the damaged image by interpolating the boundaries of objects with cubic splines. After the edge restoration stage, texture restoration using a 2D autoregressive texture model is carried out. The image intensity is locally modeled by a first-order spatial autoregressive model with support in a strongly causal prediction region on the plane. The model parameters are estimated by the Yule-Walker method. Several examples considered in this paper show the effectiveness of the proposed approach both for removing large objects and for recovering small regions on several test images.
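The following is a minimal sketch of the texture-model step: fitting a strongly causal first-order 2D AR model and using it to synthesize texture. The parameters here are estimated by least-squares normal equations, which play the same role as the Yule-Walker equations used in the paper.

```python
# Hedged sketch of a causal first-order 2D AR texture model (neighbors at (i,j-1), (i-1,j), (i-1,j-1)).
import numpy as np

def fit_ar_2d(tex: np.ndarray) -> np.ndarray:
    left = tex[1:, :-1].ravel()
    up   = tex[:-1, 1:].ravel()
    diag = tex[:-1, :-1].ravel()
    cur  = tex[1:, 1:].ravel()
    A = np.column_stack([left, up, diag])
    coeffs, *_ = np.linalg.lstsq(A, cur, rcond=None)   # least-squares stand-in for Yule-Walker
    return coeffs

def synthesize(coeffs, shape, sigma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    out = rng.normal(0, sigma, shape)                  # driving noise
    for i in range(1, shape[0]):
        for j in range(1, shape[1]):
            out[i, j] += coeffs @ np.array([out[i, j-1], out[i-1, j], out[i-1, j-1]])
    return out

sample = np.cumsum(np.random.rand(64, 64), axis=1) * 0.01   # toy correlated texture
print(synthesize(fit_ar_2d(sample), (32, 32)).shape)
```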
The aim of this paper is to give an overview of recent research, development, and civil applications of remotely piloted aircraft systems (RPAS) in Europe. It describes a European strategy for the development of civil RPAS applications and reflects most of the contents of the European staff working document SWD(2012) 259 final.
We propose a cluster-driven trilateral filter for speckle reduction in ultrasound images. In addition to operating in the spatial dimension and the intensity dimension, the proposed filter merges clustering information simultaneously. We compare the proposed filter with a normalized bilateral filter for speckle reduction using real 3-D ultrasound images. Our experimental results indicate that the cluster-driven trilateral filter exhibits better performance for speckle reduction and edge-feature preservation than the normalized bilateral filter. In addition, we investigate the graphics processing unit (GPU) technique and apply it to the proposed 3-D filter. We design and test a GPU framework and compare it with a single-core CPU framework. Our experimental results show that the GPU-accelerated trilateral filter can obtain a roughly 20-fold increase in speed.
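As a minimal sketch of the trilateral idea, the weight of each neighbor below combines a spatial Gaussian, an intensity (range) Gaussian, and a cluster-agreement term so that pixels from a different cluster contribute less. It is 2-D and brute-force for clarity; the paper's filter is 3-D and GPU-accelerated, and its exact kernels are not reproduced.

```python
# Hedged sketch of a cluster-driven trilateral weighting scheme (illustrative parameters).
import numpy as np

def trilateral(img, labels, radius=2, sigma_s=1.5, sigma_r=0.1, cluster_penalty=0.1):
    out = np.zeros_like(img)
    I = np.pad(img, radius, mode="reflect")
    L = np.pad(labels, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = I[i:i + 2*radius + 1, j:j + 2*radius + 1]
            lpatch = L[i:i + 2*radius + 1, j:j + 2*radius + 1]
            w_range = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            w_cluster = np.where(lpatch == labels[i, j], 1.0, cluster_penalty)
            w = w_spatial * w_range * w_cluster
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out

img = np.random.rand(32, 32)
labels = (img > 0.5).astype(int)         # stand-in clustering (e.g. k-means in practice)
print(trilateral(img, labels).shape)
```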
Multiframe super-resolution uses the information from a set of low-resolution images to produce a high-resolution output image. This process can prospectively be run several times, with interim sets of enhanced images that can be further enhanced. This paper presents and discusses the results of applying this hierarchical technique to a created set of images. When successful, this approach produces a better quality image than the traditional single-run super-resolution approach. This provides a way for existing super-resolution algorithms to further enhance image quality without modifying the underlying algorithm.
Nowadays, interest in real-time services, like audio and video, is growing. These services are mostly transmitted over packet networks based on the IP protocol, so analyses of these services and of their behavior in such networks are becoming more frequent. Video has become a significant part of all data traffic sent over IP networks. In general, a video service is a one-way service (except, e.g., video calls), and network delay is not as important a factor as it is in a voice service. The dominant network factors that influence final video quality are packet loss, delay variation, and the capacity of the transmission links. The analysis of video quality concentrates on the resistance of video codecs to packet loss in the network, which causes artefacts in the video. IPsec provides confidentiality and integrity protection (e.g., 3DES or AES in CBC mode for encryption and HMAC-SHA1 for authentication) through the Authentication Header and the Encapsulating Security Payload (ESP). The paper gives a detailed view of the performance of video streaming over an IP-based network. We compare the quality of video under packet loss and with encryption as well. The measured results demonstrate the relation of the video codec type and bitrate to the final video quality.
An intelligent transportation system (ITS) is a typical cyber-physical system (CPS) that aims to provide efficient, effective, reliable, and safe driving experiences with minimal congestion and effective traffic flow management. In order to achieve these goals, various ITS technologies need to work synergistically. Nonetheless, ITS's reliance on wireless connectivity makes it vulnerable to cyber threats. Thus, it is critical to understand the impact of cyber threats on ITS. In this paper, using a real-world transportation dataset, we evaluate the consequences of cyber threats, specifically attacks against service availability that jam the communication channel of the ITS. In this way, we can better understand the importance of ensuring adequate security for safety- and life-critical ITS applications before full and expensive real-world deployments. Our experimental data show that cyber threats against service availability can adversely affect traffic efficiency and safety, as evidenced by increased travel time, fuel consumption, and degradation of other evaluated performance metrics as the communication network is compromised. Finally, we discuss a framework to make ITS secure and more resilient against cyber threats.
This paper applies blind source separation, or independent component analysis, to images that may contain mixtures of text, audio, or other images for steganography purposes. The paper focuses on separating mixtures in a transform domain such as the Fourier domain or the wavelet domain. The study addresses the effectiveness of steganography when using linear mixtures of multimedia components and the ability of standard blind source separation techniques to discern hidden multimedia messages. Mixing in the space, frequency, and wavelet (scale) domains is compared. Effectiveness is measured using the mean square error between the original and recovered images.
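The following is a minimal sketch of the separation step: two images are linearly mixed and FastICA (one standard blind source separation technique) recovers the independent components. Mixing in a transform domain, as studied in the paper, would apply the same step to transform coefficients instead of pixels.

```python
# Hedged sketch of blind source separation of a linear image mixture with FastICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
cover = rng.random((64, 64))
secret = (rng.random((64, 64)) > 0.5).astype(float)       # stand-in hidden image

S = np.column_stack([cover.ravel(), secret.ravel()])       # sources as columns
A = np.array([[0.9, 0.1], [0.4, 0.6]])                     # linear mixing matrix
X = S @ A.T                                                 # observed mixtures

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)                            # estimated sources (up to scale/order)
print(recovered.shape)                                      # (4096, 2)
```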
Wireless networks are now ubiquitous across the tactical environment. They offer unprecedented communications
and data access capabilities. However, providing information security to wireless transmissions without impacting
performance is a challenge. The information security requirement for each operational scenario presents a large
trade space for functionality versus performance. One aspect of this trade space pertains to where information
security services are integrated into the protocol stack. This paper will present an overview of the various options
that exist and will discuss the advantages and disadvantages of each option.
This paper proposes a modification to the Extended Pairs of Values (EPoV) method of 2LSB steganalysis in digital still images. In EPoV, detection and estimation of the hidden message length are performed in two separate processes, since it was designed for automated detection. The proposed method instead uses the standard deviation of the EPoV statistic to measure the amount of distortion introduced into the stego image by 2LSB replacement, which is directly proportional to the embedding rate. It is shown that the method can accurately estimate the length of the hidden message and outperforms other targeted 2LSB steganalysis methods in the literature. The proposed method is also more consistent with steganalysis methods in the literature, in that it reports the amount of difference from the expected clean image. According to experimental results based on analysing 3000 never-compressed images, the proposed method is more accurate than current targeted 2LSB steganalysis methods for low embedding rates.
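For reference, the following is a minimal sketch of the embedding operation this steganalysis targets: 2LSB replacement writes two message bits into the two least significant bits of each used cover pixel. The EPoV statistic itself is not reproduced here.

```python
# Hedged sketch of 2LSB replacement embedding (the operation the steganalysis detects).
import numpy as np

def embed_2lsb(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """cover: uint8 pixels; bits: flat array of 0/1 values, length 2 * number of used pixels."""
    stego = cover.copy().ravel()
    pairs = bits.reshape(-1, 2)
    vals = pairs[:, 0] * 2 + pairs[:, 1]                    # two bits -> value 0..3
    stego[:len(vals)] = (stego[:len(vals)] & 0b11111100) | vals
    return stego.reshape(cover.shape)

cover = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
msg = np.random.randint(0, 2, 32)                           # 16 pixels' worth of payload
print(embed_2lsb(cover, msg)[:2])
```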
We consider the problem of single snapshot direction-of-arrival (DOA) estimation of multiple targets in monostatic multiple-input multiple-output (MIMO) radar. When only a single snapshot is used, the sample covariance matrix of the data becomes non-invertible and, therefore, does not permit application of Capon-based DOA estimation techniques. On the other hand, low-resolution techniques, such as the conventional beamformer, suffer from biased estimation and fail to resolve closely spaced sources. In this paper, we propose a new Capon-based method for DOA estimation in MIMO radar using a single radar pulse. Assuming that the angular locations of the sources are known a priori to be located within a certain spatial sector, we employ multiple transmit beams to focus the transmit energy of multiple orthogonal waveforms within the desired sector. The transmit weight vectors are carefully designed such that they have the same transmit power distribution pattern. As compared to the standard MIMO radar, the proposed approach enables transmitting an arbitrary number of orthogonal waveforms. By using matched-filtering at the receiver, the data associated with each beam is extracted yielding a virtual data snapshot. The total number of virtual snapshots is equal to the number of transmit beams. By choosing the number of transmit beams to be larger than the number of receive elements, it becomes possible to form a full-rank sample covariance matrix. The Capon beamformer is then applied to estimate the DOAs of the targets of interest. The proposed method is shown to have improved DOA estimation performance as compared to conventional single-snapshot DOA estimation methods.
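The following is a minimal sketch of the final estimation step only: given a set of virtual snapshots (one per transmit beam), form the sample covariance and evaluate the Capon spatial spectrum. The transmit-beam design that produces the virtual snapshots is not reproduced here, and the toy data is synthetic.

```python
# Hedged sketch of Capon DOA estimation from virtual snapshots for a uniform linear receive array.
import numpy as np

def capon_spectrum(snapshots: np.ndarray, angles_deg: np.ndarray, d: float = 0.5):
    """snapshots: (num_rx, num_virtual_snapshots) complex array; d: spacing in wavelengths."""
    n_rx = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]    # sample covariance
    R_inv = np.linalg.inv(R + 1e-6 * np.eye(n_rx))             # diagonal loading for stability
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d * np.arange(n_rx) * np.sin(theta))   # ULA steering vector
        spectrum.append(1.0 / np.real(a.conj() @ R_inv @ a))
    return np.array(spectrum)

# Toy data: 8 receive elements, 12 virtual snapshots, one target near 20 degrees plus noise.
rng = np.random.default_rng(1)
n_rx, n_snap = 8, 12
a_true = np.exp(2j * np.pi * 0.5 * np.arange(n_rx) * np.sin(np.deg2rad(20)))
X = np.outer(a_true, rng.standard_normal(n_snap)) + 0.1 * (
    rng.standard_normal((n_rx, n_snap)) + 1j * rng.standard_normal((n_rx, n_snap)))
angles = np.arange(-90, 90)
print(angles[np.argmax(capon_spectrum(X, angles))])            # expect a value near 20
```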
Radio frequency identification (RFID) systems present an incredibly cost-effective and easy-to-implement solution to
close-range localization. One of the important applications of a passive RFID system is to determine the reader position
through multilateration based on the estimated distances between the reader and multiple distributed reference tags
obtained from, e.g., the received signal strength indicator (RSSI) readings. In practice, the achievable accuracy of
passive RFID reader localization suffers from many factors, such as the distorted RSSI reading due to channel
impairments in terms of the susceptibility to reader antenna patterns and multipath propagation. Previous studies have
shown that the accuracy of passive RFID localization can be significantly improved by properly modeling and
compensating for such channel impairments. The objective of this paper is to report experimental study results that
validate the effectiveness of such approaches for high-accuracy RFID localization. We also examine a number of
practical issues arising in the underlying problem that limit the accuracy of reader-tag distance measurements and,
therefore, the estimated reader localization. These issues include the variations in tag radiation characteristics for similar
tags, effects of tag orientations, and reader RSS quantization and measurement errors. As such, this paper reveals valuable insights into the issues and solutions for achieving high-accuracy passive RFID localization.
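A minimal sketch of the multilateration step is given below: given estimated distances from the reader to several reference tags at known positions (e.g. derived from RSSI via a path-loss model), a linearized least-squares system yields the reader position. The tag layout and noise level are illustrative.

```python
# Hedged sketch of least-squares multilateration from reader-tag distance estimates.
import numpy as np

def multilaterate(tag_pos: np.ndarray, dists: np.ndarray) -> np.ndarray:
    """tag_pos: (N, 2) known tag coordinates; dists: (N,) estimated reader-tag distances."""
    # Subtract the first tag's circle equation from the rest to obtain a linear system.
    A = 2 * (tag_pos[1:] - tag_pos[0])
    b = (dists[0]**2 - dists[1:]**2
         + np.sum(tag_pos[1:]**2, axis=1) - np.sum(tag_pos[0]**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

tags = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
true_reader = np.array([1.0, 2.5])
noisy_d = np.linalg.norm(tags - true_reader, axis=1) + np.random.normal(0, 0.05, 4)
print(multilaterate(tags, noisy_d))          # close to (1.0, 2.5)
```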
Ad-hoc networks of omni-directional sensors provide an efficient means of obtaining low-cost, easily deployed, reliable target-tracking systems. To remove the dependency of the target position estimate on the target power, a transformation to another coordinate system is introduced. It can be shown that the problem of sensing target position with omni-directional sensors can then be adapted to the conventional Kalman filter framework. To validate the proposed methodology, an analysis is first conducted to show that converting to log-ratio space, while reducing the number of parameters to track, loses no information about the target position. The analysis is done by deriving the CRLBs for the position estimation error in both the original and transformed spaces and showing that they are the same. Second, to show how the traditional Kalman filter framework performs, a particle filter that works on the transformed coordinates is designed. The number of particles is selected to be sufficiently large, and the result is used as ground truth to compare against the performance of the Kalman tracker. The comparisons are done for different target movement speeds and sensor densities. The results provide insight into Kalman tracker performance in different situations.
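As background, the sketch below shows the conventional Kalman filter framework the paper adapts, for a 2-D constant-velocity target with a linear position measurement; the log-ratio coordinate transformation itself is not reproduced.

```python
# Hedged sketch of a constant-velocity Kalman filter (illustrative noise covariances).
import numpy as np

dt = 1.0
F = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])  # state transition
H = np.hstack([np.eye(2), np.zeros((2, 2))])                                # measure position
Q = 0.01 * np.eye(4)                                                        # process noise
R = 0.25 * np.eye(2)                                                        # measurement noise

def kalman_step(x, P, z):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
for z in np.array([[1.0, 1.1], [2.1, 2.0], [2.9, 3.2]]):     # toy position measurements
    x, P = kalman_step(x, P, z)
print(x[:2])                                                  # filtered position estimate
```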
The coprime sampling scheme allows signal frequency estimation through two sub-Nyquist samplers where the down-sampling rates M and N are coprime integers. By considering the difference set of this pair of O(M + N ) physical samples, O(MN ) consecutive virtual samples can be generated. In this paper, a generalized coprime sampling technique is proposed by using O(M + pN ) samples to generate O(pMN ) virtual samples, where p is an integer argument. As such, the existing coprime sampling techniques are represented as a special case of a much broader and generalized scheme. The analytical expressions of the number of virtual samples, frequency resolution and the corresponding latency time are derived. The effectiveness of the proposed technique is verified using simulation results.
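The sketch below illustrates the difference-set construction behind coprime sampling under an assumed arrangement of M physical samples at multiples of N and pN physical samples at multiples of M (O(M + pN) samples in total); the paper's exact configuration and lag counts may differ, but the growth of the consecutive virtual lags with p is visible.

```python
# Hedged sketch: count consecutive virtual lags generated by a (generalized) coprime sample set.
import numpy as np

def consecutive_lags(M: int, N: int, p: int) -> int:
    idx_a = N * np.arange(M)                  # first sub-Nyquist sampler
    idx_b = M * np.arange(p * N)              # second, extended sampler
    samples = np.union1d(idx_a, idx_b)
    diffs = set(np.abs(samples[:, None] - samples[None, :]).ravel().tolist())
    run = 0                                   # count consecutive virtual lags 0, 1, 2, ...
    while run in diffs:
        run += 1
    return run

M, N = 4, 5                                   # coprime down-sampling factors
for p in (1, 2, 3):
    print(p, consecutive_lags(M, N, p))       # number of consecutive virtual lags grows with p
```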
Vehicle-to-X (V2X) communication (vehicle-to-vehicle [V2V] and vehicle-to-infrastructure [V2I]), used in intelligent transportation systems (ITS)/vehicular ad hoc networks (VANETs), promises improved traffic efficiency, road safety, and provision of infotainment services. However, the levels of these improvements have not been clearly researched and documented, especially in realistic environments [2]. Consequently, using field and simulation data, we investigate the safety and traffic-efficiency benefits of V2V communication applications in a realistic scenario. To do this, we built a real-world simulation test-bed using field traffic data of the Maryland (MD)/Washington DC and Virginia (VA) area from July 2012 to December 2012. In addition, we developed an incident warning application (IWA), which equipped vehicles use to bypass a compound road accident, a slippery roadway caused by ice, and reduced visibility as a result of fog; unequipped/classic vehicles are unaware of these hazards and hence suffer adverse effects. On average, our results show that tangible benefits/improvements with respect to travel time (126.78%), average speed (56.12%), fuel consumption (8.05%), and CO2 emissions (8.05%), together with other evaluated performance metrics, are derivable from V2V communication, especially at specific penetration rates of IWA-equipped vehicles.
Human emotion recognition from speech is studied frequently for its importance in many applications, e.g., human-computer interaction. There is wide diversity and little agreement about the basic emotions or emotion-related states on the one hand, and about where the emotion-related information lies in the speech signal on the other. These diversities motivate our investigation into extracting meta-features using the PCA approach or a non-adaptive random projection (RP), which significantly reduce the large-dimensional speech feature vectors that may contain a wide range of emotion-related information. Subsets of meta-features are fused to increase the performance of the recognition model, which adopts a score-based LDC classifier. We demonstrate that our scheme outperforms state-of-the-art results when tested on non-prompted databases as well as acted databases (i.e., where subjects act specific emotions while uttering a sentence). However, the large gap between the accuracy rates achieved on the different types of speech datasets raises questions about the way emotions modulate speech. In particular, we argue that emotion recognition from speech should not be treated as a classification problem. We demonstrate the presence of a spectrum of different emotions in the same speech portion, especially in the non-prompted datasets, which tend to be more "natural" than the acted datasets where the subjects attempt to suppress all but one emotion.
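The following is a minimal sketch of the dimensionality-reduction step: a non-adaptive Gaussian random projection maps a large speech feature vector to a small set of meta-features, which a simple linear classifier then scores. Feature values and dimensions here are synthetic stand-ins.

```python
# Hedged sketch of random-projection meta-features followed by a linear (LDC-style) classifier.
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 1500))          # 200 utterances x 1500 raw speech features
y = rng.integers(0, 4, 200)                   # stand-in emotion labels (4 classes)

rp = GaussianRandomProjection(n_components=60, random_state=0)
meta = rp.fit_transform(X)                    # 60 meta-features per utterance

clf = LinearDiscriminantAnalysis().fit(meta, y)   # linear discriminant classifier
print(clf.score(meta, y))
```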
Universal cybernetics is the study of control and communications in living and non-living systems. In this paper the universal cybernetics duality principle (UCDP), first identified in control theory in 1978 and expressing a cybernetic duality behavior for our universe, is reviewed. The review comes on the heels of major prizes awarded to physicists for their use of mathematical dualities in solving intractable problems in physics such as those of cosmology's 'dark energy', an area that according to a recent New York Times article has become "a cottage industry in physics today". These dualities are not unlike those of our UCDP, which are further enhanced with physical dualities. For instance, in 2008 the UCDP guided us to the derivation of the laws of retention in physics as the space-penalty dual of the laws of motion in physics, including the dark energy thought responsible for the observed increase of the volume of our Universe as it ages. The UCDP has also guided us to the discovery of significant results in other fields, such as: 1) matched processors for quantized control, with applications in the modeling of central nervous system (CNS) control mechanisms; 2) radar designs, where the discovery of latency theory, the time-penalty dual of information theory, has led us to high-performance radar solutions that evade the use of 'big data' in the form of SAR imagery of the earth; and 3) biological lifespan bounds, where the life expectancy of an organism is sensibly predicted through lingerdynamics, the identified time-penalty dual of thermodynamics, which relates its adult lifespan to either: a) the ratio of its body size to its nutritional consumption rate; b) its specific heat capacity; or c) the ratio of its nutritional consumption rate energy to its entropic volume energy, a type of dark energy that is consistent with the observed decrease in the mass density of the organism as it ages.
Packet loss occurs in real-time voice transmission over a wireless broadcast ad-hoc network, creating disruptions in the sound. The basic objective of this research is to design a wireless ad-hoc network based on two Android devices using the Wi-Fi Direct Application Programming Interface (API) and to apply a network code, the Reed-Solomon code. The network code is used to encode the data of a music WAV file and to recover any lost packets; packets are dropped using a loss module at the transmitter device to analyze performance, with the objective of retrieving the original file at the receiver device using the network code. This resulted in faster transmission of the files despite dropped packets. In the end, both devices held the original formatted music files, with a complete performance analysis based on the transmission delay.
This paper presents the results of using a constant modulus algorithm (CMA) to recover shaped-offset quadrature phase-shift keying (SOQPSK-TG) modulated data that has been transmitted using the iNET data packet structure; this standard is defined and used for aeronautical telemetry. Based on the iNET packet structure, the adaptive block-processing CMA equalizer can be initialized using the minimum mean square error (MMSE) equalizer [3]. This CMA equalizer is being evaluated for use on iNET-structured data, with initial tests conducted on data received in a controlled laboratory environment. Thus the CMA equalizer is applied at the receiver to data packets that have been experimentally generated in order to determine the feasibility of our equalization approach, and its performance is compared to that of the MMSE equalizer. Performance evaluation is based on computed bit error rate (BER) counts for these equalizers.
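The following is a minimal sketch of a constant modulus algorithm equalizer update for a generic complex baseband signal; the iNET packet structure, the SOQPSK-TG waveform, and the MMSE initialization described in the paper are not reproduced here.

```python
# Hedged sketch of a stochastic-gradient CMA equalizer on a toy constant-modulus signal.
import numpy as np

def cma_equalize(x: np.ndarray, num_taps: int = 11, mu: float = 1e-3, R2: float = 1.0):
    """x: received complex samples. Returns equalized output and final tap weights."""
    w = np.zeros(num_taps, dtype=complex)
    w[num_taps // 2] = 1.0                    # center-spike initialization
    y_out = np.zeros(len(x) - num_taps, dtype=complex)
    for n in range(len(y_out)):
        xn = x[n:n + num_taps][::-1]          # regressor (most recent sample first)
        y = w.conj() @ xn
        e = y * (np.abs(y) ** 2 - R2)         # CMA error term
        w = w - mu * np.conj(e) * xn          # stochastic-gradient weight update
        y_out[n] = y
    return y_out, w

# Toy test: a QPSK-like constant-modulus signal through a mild 2-tap channel plus noise.
rng = np.random.default_rng(0)
sym = np.exp(1j * np.pi / 2 * rng.integers(0, 4, 5000))
rx = np.convolve(sym, [1.0, 0.3 + 0.2j], mode="same") + 0.02 * rng.standard_normal(5000)
y, w = cma_equalize(rx)
print(np.mean(np.abs(np.abs(y[2000:]) - 1.0)))    # modulus error after the initial convergence
```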
In this paper we address two issues regarding cognitive radio spectrum sensing. Spectrum sensing for cognitive radio has been extensively studied in the recent past, and multiple techniques have been proposed. One such technique is entropy-based detection, in which we measure the entropy of the received signal after converting it to the frequency domain. The logic is that, in the frequency domain, the entropy of noise (assuming it is AWGN) is higher than that of the signal, enabling us to separate noise from signal by using an entropy-based threshold. This approach, however, makes some assumptions which may not be valid. It assumes that at any time only one of the two (signal or noise) is present, and further assumes that a given test segment is either a signal segment or a noise segment, with the segment length fixed and known. These assumptions may be too constraining, and we propose an alternative method to address these issues. We use a filtering technique in the form of independent component analysis (ICA) to segment the signal, and further use additional techniques such as energy weighting to weight the components and estimate the signal strength. We test our proposed method on a variety of signals including image, audio, and sinusoidal signals. Results show improved performance as well as the availability of new measures generated by our proposed technique.
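For reference, the sketch below shows the baseline entropy-based detector the paper builds on: compute the spectrum of a received segment, form a histogram of spectral magnitudes, and compare its Shannon entropy to a threshold (noise-only segments tend to have higher spectral entropy). The threshold and signal model are illustrative.

```python
# Hedged sketch of frequency-domain entropy-based detection for spectrum sensing.
import numpy as np

def spectral_entropy(segment: np.ndarray, bins: int = 64) -> float:
    mag = np.abs(np.fft.fft(segment))
    hist, _ = np.histogram(mag, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
n = 1024
noise_only = rng.standard_normal(n)
signal_plus_noise = np.sin(2 * np.pi * 0.1 * np.arange(n)) + 0.5 * rng.standard_normal(n)

h_noise = spectral_entropy(noise_only)
h_signal = spectral_entropy(signal_plus_noise)
threshold = 0.5 * (h_noise + h_signal)            # illustrative threshold between the two regimes
print(h_noise, h_signal, h_signal < threshold)    # lower entropy expected when a signal is present
```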
This work studies a computationally simple method of saliency map calculation. Research in this field has received increasing interest because of the use of complex techniques on portable devices: a saliency map allows increasing the speed of many subsequent algorithms and reducing their computational complexity. The proposed method of saliency map detection is based on both image-space and frequency-space analysis. Several examples of test images from the Kodak dataset with different levels of detail considered in this paper demonstrate the effectiveness of the proposed approach. We present experiments which show that the proposed method provides better results than the Saliency Toolbox framework in terms of both accuracy and speed.
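To illustrate the kind of frequency-space analysis involved, the following is a minimal sketch of a representative frequency-domain saliency method (the spectral residual approach of Hou and Zhang); it is shown only as context and is not the paper's own algorithm.

```python
# Hedged sketch of spectral-residual saliency (a representative frequency-space method).
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray: np.ndarray) -> np.ndarray:
    F = np.fft.fft2(gray)
    log_amp = np.log(np.abs(F) + 1e-9)
    phase = np.angle(F)
    residual = log_amp - uniform_filter(log_amp, size=3)    # remove the smooth spectrum trend
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(sal, sigma=2)                    # smooth the saliency map

img = np.random.rand(128, 128)
img[40:60, 70:90] += 1.0                                    # a bright, potentially salient patch
sal = spectral_residual_saliency(img)
print(np.unravel_index(np.argmax(sal), sal.shape))          # location of the strongest response
```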
In this paper, we present a novel method for encrypting and decrypting large amounts of data such as two-dimensional (2-D) images, both gray-scale and color, without loss of information and using private keys of varying lengths. The proposed method is based on the concept of the tensor representation of an image and splitting the 2-D discrete Fourier transform (DFT) into one-dimensional (1-D) DFTs of signals from the tensor representation, or transform. The splitting of the transform is accomplished in a three-dimensional (3-D) space, namely on the 3-D lattice placed on the torus. Each splitting-signal of the image defines the 2-D DFT along the frequency points located on spirals on the torus. The spirals have different forms and cover the lattice on the torus in a complex way, which makes them very effective when moving data through and between the spirals, and along the spirals. The encryption consists of several iterative applications of mapping the 3-D torus into several tori of smaller sizes, then rotating and moving the data around the spirals on all tori. The encryption results in an image that is uncorrelated. The decryption algorithm uses the encrypted data and processes it in inverse order with an identical number of iterations. The proposed method can be extended to encrypt and decrypt documents as well as other types of digital media. Simulation results of the proposed method are presented to show its performance for image encryption.
Switched diversity is a solution to the random attenuation of a signal due to fading channel distortion. It uses two or more identical branches, or antennas, to monitor the received signal, with the main assumption that the branches are uncorrelated. Switched diversity uses a switching threshold as the criterion for an acceptable received signal path. In a traditional switched diversity scheme, if no path meets the threshold criterion, one is chosen randomly. We look into a modified hybrid switched/selection scheme proposed by Yang and Alouini, which helps the receiver pick the best path when all of the channel conditions are poor. The scheme is called switch-and-examine combining with post-examining selection (SECps), a variant of switch-and-examine combining (SEC). We take the theoretical findings from Yang and Alouini's paper and use MATLAB to validate the simulation against the closed-form equations for DBPSK and noncoherently demodulated FSK.
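As a rough illustration of the scheme, the Monte Carlo sketch below simulates SECps over two independent Rayleigh branches with DBPSK: stay on the current branch if its SNR meets the switching threshold; otherwise examine the other branch, and if neither qualifies, select the better one (the "post-examining" step). The SNR values and threshold are illustrative, and this is not the paper's MATLAB validation.

```python
# Hedged Monte Carlo sketch of SECps average BER for DBPSK in Rayleigh fading.
import numpy as np

rng = np.random.default_rng(0)
avg_snr = 10 ** (10 / 10)            # 10 dB average branch SNR
threshold = 10 ** (5 / 10)           # 5 dB switching threshold
trials = 200_000

snr = rng.exponential(avg_snr, size=(trials, 2))     # Rayleigh fading -> exponential SNR
use_first = snr[:, 0] >= threshold
use_second = ~use_first & (snr[:, 1] >= threshold)
chosen = np.where(use_first, snr[:, 0],
         np.where(use_second, snr[:, 1], snr.max(axis=1)))   # post-examining selection

ber = np.mean(0.5 * np.exp(-chosen))                 # conditional DBPSK BER, averaged over fading
print(ber)
```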
Two-dimensional (2D) transmit beamforming aims at focusing the transmitted energy within a certain desired sector while minimizing the amount of energy in the out-of-sector regions. In this paper, we propose parsimonious formulations of the sidelobe control problem in 2D transmit beamforming with multidimensional arrays. The out-of-sector region is partitioned into a small number of subsectors, and the subspace spanned by the steering vectors associated with the spatial directions within a certain subsector is approximated by the effective discrete prolate spheroidal sequences associated with that subsector. Sidelobe control is then achieved by imposing constraints on the magnitude of the inner product between the 2D transmit beamforming weight vector and the discrete prolate spheroidal sequences. Simulation examples are presented which show the effectiveness of the proposed formulations.
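The sketch below illustrates the sidelobe-control ingredient in one dimension: for a uniform linear array, the steering vectors over an out-of-sector angular subsector are approximated by a few discrete prolate spheroidal sequences (DPSS), and sidelobes can be controlled by bounding the inner products between the weight vector and those sequences. The subsector, array size, and sequence count are illustrative; the paper's 2D formulation is not reproduced.

```python
# Hedged 1-D sketch: DPSS basis for an angular subsector and the constrained inner products.
import numpy as np
from scipy.signal.windows import dpss

n_elem = 16
d = 0.5                                             # element spacing in wavelengths
sector = np.deg2rad(np.linspace(40, 60, 50))        # one out-of-sector subsector
f = d * np.sin(sector)                              # normalized spatial frequencies of the subsector
half_bw = 0.5 * (f.max() - f.min())
NW = half_bw * n_elem                               # aperture-(half)bandwidth product
n_seq = max(int(np.ceil(2 * NW)), 1)                # effective dimension of the subspace
basis = dpss(n_elem, NW, Kmax=n_seq)                # (n_seq, n_elem) real DPSS basis

# Modulate the baseband DPSS basis to the subsector center frequency.
f0 = 0.5 * (f.max() + f.min())
carrier = np.exp(2j * np.pi * f0 * np.arange(n_elem))
basis = basis * carrier

w = np.ones(n_elem) / np.sqrt(n_elem)               # example (uniform) weight vector
print(np.abs(basis.conj() @ w))                     # magnitudes to be bounded by the constraints
```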
Efficiency in terms of both accuracy and speed is highly important in any system, especially when it comes to image processing. The purpose of this paper is to improve an existing implementation of multi-scale retinex (MSR) by utilizing the fast Fourier transform (FFT) within the illumination-estimation step of the algorithm, to improve the speed at which Gaussian blurring filters are applied to the original input image. In addition, alpha-rooting can be used as a separate sharpening technique whose results are fused with those of the retinex algorithm to achieve the best possible image, as shown by the values of the considered color image enhancement measure (EMEC).
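The following is a minimal sketch of the two ingredients named above: an FFT-based Gaussian blur used as the illumination estimate inside a single retinex scale, and alpha-rooting of the Fourier magnitude for sharpening. Parameter values are illustrative, and the fusion step and EMEC measure are not reproduced.

```python
# Hedged sketch of FFT-based Gaussian blurring for retinex and alpha-rooting sharpening.
import numpy as np

def fft_gaussian_blur(img: np.ndarray, sigma: float) -> np.ndarray:
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    H = np.exp(-2 * (np.pi ** 2) * (sigma ** 2) * (fx ** 2 + fy ** 2))   # Gaussian transfer function
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

def single_scale_retinex(img: np.ndarray, sigma: float) -> np.ndarray:
    illum = fft_gaussian_blur(img, sigma)
    return np.log1p(img) - np.log1p(illum)          # reflectance estimate for one scale

def alpha_rooting(img: np.ndarray, alpha: float = 0.9) -> np.ndarray:
    F = np.fft.fft2(img)
    F_sharp = (np.abs(F) ** alpha) * np.exp(1j * np.angle(F))   # compress magnitude, keep phase
    return np.real(np.fft.ifft2(F_sharp))

img = np.random.rand(128, 128)
print(single_scale_retinex(img, sigma=15).shape, alpha_rooting(img).shape)
```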
This paper describes a novel inpainting approach for removing marked dynamic objects from videos captured with a camera, so long as the objects occlude parts of the scene with a static background. The proposed approach removes objects and restores missing or tainted regions in a video sequence by utilizing spatial and temporal information from neighboring frames. The algorithm iteratively performs the following operations: acquire a frame; update the scene model; update the positions of moving objects; and replace the parts of the frame occupied by the objects marked for removal using the background model. In this paper, we extend an image inpainting algorithm based on texture and structure reconstruction by incorporating an improved strategy for video. An image inpainting approach is presented that restores the edges of objects in a frame by constructing a composite curve, using the concepts of parametric and geometric continuity. It is shown that this approach can restore curved edges and provides more flexibility for curve design in the damaged frame by interpolating the boundaries of objects with cubic splines. After the edge restoration stage, texture reconstruction using a patch-based method is carried out. We demonstrate the performance of the new approach via several examples, showing the effectiveness of our algorithm and comparing it with state-of-the-art video inpainting methods.
In this paper we present a method for the functional analysis of the human heart based on electrocardiography (ECG) signals. The approach uses the apparatus of analytical and differential geometry together with correlation and regression analysis. The ECG contains information on the current condition of the cardiovascular system as well as on pathological changes in the heart. Mathematical processing of the heart rate variability yields a large set of mathematical and statistical characteristics. These characteristics of the heart rate are used in research problems to study physiological changes that determine functional changes of an individual. The proposed method is implemented for current Android- and iOS-based mobile devices.
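As a small illustration of the heart-rate-variability characteristics mentioned above, the sketch below computes two standard statistics (SDNN and RMSSD) from a series of RR intervals; the paper's analytical and differential-geometry apparatus is not reproduced here.

```python
# Hedged sketch of common HRV statistics from successive RR (beat-to-beat) intervals.
import numpy as np

def hrv_stats(rr_ms: np.ndarray) -> dict:
    """rr_ms: successive RR intervals in milliseconds."""
    diffs = np.diff(rr_ms)
    return {
        "mean_rr": float(np.mean(rr_ms)),
        "sdnn": float(np.std(rr_ms, ddof=1)),                  # overall variability
        "rmssd": float(np.sqrt(np.mean(diffs ** 2))),          # short-term variability
    }

rr = 800 + 40 * np.random.randn(300)          # synthetic RR series around 75 bpm
print(hrv_stats(rr))
```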