This paper compares several designs of spanning tree adders for 16- and 32-bit widths. The carry-select part of the spanning tree is realized with ripple-carry and carry-skip adders (4, 8, and 16 bits) and compared in terms of delay, complexity, and power consumption. The spanning tree design is also compared with a conventional carry-lookahead adder. All designs use only 2-input NAND and NOR gates and inverters in 0.18 μm CMOS technology. Delay and power consumption are determined through simulations performed with Synopsys and Cadence design tools. The spanning tree adder realized with carry-skip adders is about 40% faster than the carry-lookahead adder, at the cost of an approximate 17% increase in complexity and 22% increase in power.
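The trade-off being measured shows up at the bit level. Below is a minimal Python sketch, not tied to the paper's gate-level designs, of the two carry chains being compared: a plain ripple-carry block and a carry-skip block whose skip mux forwards the incoming carry whenever every bit position propagates. The 4-bit block size and helper names are illustrative assumptions.

```python
def ripple_carry(a_bits, b_bits, cin=0):
    """Add two little-endian bit lists with a rippling carry."""
    s, c = [], cin
    for a, b in zip(a_bits, b_bits):
        s.append(a ^ b ^ c)
        c = (a & b) | (c & (a ^ b))          # generate | carry-propagate
    return s, c

def carry_skip(a_bits, b_bits, cin=0, block=4):
    """Functionally identical adder; each block's skip mux forwards cin
    directly when all of its bit positions propagate (a XOR b), which is
    what shortens the worst-case carry path in hardware."""
    s, c = [], cin
    for i in range(0, len(a_bits), block):
        ab, bb = a_bits[i:i + block], b_bits[i:i + block]
        sb, cout = ripple_carry(ab, bb, c)
        skip = all(a ^ b for a, b in zip(ab, bb))
        s.extend(sb)
        c = c if skip else cout              # skip path vs. ripple path
    return s, c

bits = lambda x, n: [(x >> i) & 1 for i in range(n)]
val = lambda b: sum(bit << i for i, bit in enumerate(b))
s, c = carry_skip(bits(40000, 16), bits(30000, 16))
assert val(s) + (c << 16) == 70000
```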
Low-density parity-check (LDPC) decoders use computation nodes with multioperand adders on their critical path. This paper describes the design of estimating multioperand adders that reduce the latency, power, and area of these nodes. The new estimating adders occasionally produce inaccurate results; the effect of these errors, and the resulting trade-off between latency and decoder frame error rate, is examined. For the decoder investigated, the estimating adders are found not to degrade the frame error rate.
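The paper's specific adder designs are not reproduced here, but the latency/accuracy trade-off can be illustrated generically. The Python sketch below implements one common estimation strategy, carry-chain truncation, in which bit columns are summed in independent blocks and inter-block carries are simply dropped; the 8-bit width and 4-bit block are arbitrary choices for the example.

```python
import random

def estimating_sum(values, width=8, block=4):
    """Toy estimating multioperand adder: each block of bit columns is
    summed independently and its carry-out is discarded, bounding the
    carry chain (and hence the latency) at `block` bits."""
    total = 0
    for lo in range(0, width, block):
        mask = ((1 << block) - 1) << lo
        total |= sum(v & mask for v in values) & mask   # drop carry-out
    return total

# occasionally wrong, but the error rate can be measured directly
random.seed(0)
trials = [[random.randrange(256) for _ in range(4)] for _ in range(10000)]
wrong = sum(estimating_sum(v) != sum(v) % 256 for v in trials)
print(wrong / len(trials))      # fraction of inexact results
```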
The Continuous Valued Number System (CVNS) is a novel analog-digit number system that employs bit-level analog residue arithmetic. The information redundancy among the digits makes it easy to perform the required binary operations in higher radices and reduces the implementation area and the number of required interconnections. CVNS theory opens a new approach to performing digital arithmetic with simple, elementary analog elements, such as current comparators and current mirrors, and with arbitrary precision. In this paper we discuss the design of a 16-bit radix-4 CVNS adder with controlled precision; a two-operand binary adder, designed in TSMC 0.18 μm CMOS technology, is used to illustrate the techniques.
Residue systems of representation, such as Residue Number Systems (RNS) for the prime field GF(p) or trinomial residue arithmetic for the binary field GF(2^k), are characterized by efficient multiplication and costly modular reduction. Conventional representations, on the other hand, allow very efficient reduction in some cases but require costly multiplications. The main purpose of this paper is to analyze the complexity of these two approaches when evaluating sums of products. In fact, the complexity of reduction in residue systems is similar to that of multiplication in classical representations. One of the main features of this reduction is that it does not depend on the field. Moreover, the cost of multiplication in residue systems is equivalent to the cost of reduction in classical representations for special, well-chosen fields. Taking these properties into account, we observe that an expression like A * B + C * D, which requires two products, one addition, and one reduction, evaluates faster in a residue system than in a classical one. We therefore propose to study types of expressions, offering a guide for choosing the most appropriate representation. One of the best application domains is elliptic curve cryptography, where the point addition and doubling formulas are composed of sums of products. The different kinds of coordinates, such as affine, projective, and Jacobian, offer a good choice of expressions for our study.
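As a concrete illustration of why A * B + C * D favors residue systems, here is a small Python sketch of RNS arithmetic: products and sums are digit-parallel and carry-free, and only the final Chinese-remainder reconstruction (the analogue of the costly reduction) touches all digits at once. The moduli are an arbitrary pairwise-coprime choice, not ones from the paper.

```python
from math import prod

MODULI = (251, 253, 255, 256)            # pairwise coprime, illustrative
M = prod(MODULI)

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):                       # digit-parallel, no carries
    return tuple(x * y % m for x, y, m in zip(a, b, MODULI))

def rns_add(a, b):
    return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))

def from_rns(d):                         # CRT reconstruction: the costly step
    return sum(r * (M // m) * pow(M // m, -1, m)
               for r, m in zip(d, MODULI)) % M

A, B, C, D = 1234, 5678, 910, 1112
lhs = rns_add(rns_mul(to_rns(A), to_rns(B)), rns_mul(to_rns(C), to_rns(D)))
assert from_rns(lhs) == A * B + C * D    # holds as long as the result < M
```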
This paper explores the use of a double-base number system (DBNS) in constant integer multiplication. The DBNS recoding technique represents constants in a multiple-radix form with the aim of minimizing the computation performed during constant multiplication. The paper presents a proof that multiple-radix representation reduces the number of additions sublinearly: we prove Lefèvre's conjecture that multiplication by an integer constant is achievable in sublinear time. The proof is based on some interesting properties of the double-base number system. The paper also provides numerical data showcasing some of the most recent results.
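A greedy recoding makes the idea tangible: each extra 2^a * 3^b term in the representation costs one addition in a constant multiplier, so shorter DBNS expansions mean cheaper multipliers. The following Python sketch is the simple greedy algorithm commonly used in the DBNS literature, not the paper's proof construction.

```python
def largest_2a3b(n):
    """Largest value of the form 2^a * 3^b not exceeding n."""
    best, p3 = 1, 1
    while p3 <= n:
        p = p3
        while p * 2 <= n:
            p *= 2
        best = max(best, p)
        p3 *= 3
    return best

def dbns_greedy(n):
    """Greedy double-base expansion of n as a sum of 2^a * 3^b terms."""
    terms = []
    while n:
        t = largest_2a3b(n)
        terms.append(t)
        n -= t
    return terms

print(dbns_greedy(1000))   # [972, 27, 1]: 2^2*3^5 + 3^3 + 1, two additions
```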
Quantum-dot cellular automata (QCA) is one of several proposed computational nanotechnology paradigms being investigated as alternatives to CMOS at the nanoscale. QCA has been reported to offer relatively low power consumption and very high device density. In recent years, several researchers have begun investigating relatively complex circuit architectures using QCA. Such design efforts have highlighted the crosstalk problem in QCA and the lack of research in this area. This paper explores the nature of crosstalk in QCA. We show how crosstalk can be amplified by several parameters, including wire length and the distance between adjacent cells. We develop a model and method that allow us to test for crosstalk using a set of test vectors. We also propose a set of cell placement guidelines and design geometries that help minimize QCA crosstalk in large circuits.
In this work we present improvements to hardware operators dedicated to the computation of powers with a fixed integer exponent (x^3, x^4, ...) in unsigned radix-2 fixed-point or integer representations. The proposed method reduces the number of partial products using simplifications based on new identities and transformations. These simplifications are performed at both the logical and the arithmetic levels. The method has been implemented in a VHDL generator that produces synthesizable descriptions of optimized operators, and its results have been demonstrated on FPGA circuits.
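The paper's identities for cubes and higher powers are not reproduced here, but the flavor of partial-product reduction can be shown on the simpler squaring case with two classic simplifications: x_i * x_i = x_i, and folding the symmetric pair x_i*x_j + x_j*x_i into a single product shifted left by one. The Python count below is illustrative only.

```python
def square_partial_products(width):
    """Count partial products of an unsigned `width`-bit squarer,
    naively and after the two classic squarer simplifications."""
    naive = width * width                    # all x_i * x_j pairs
    simplified = width                       # x_i * x_i = x_i terms
    simplified += width * (width - 1) // 2   # folded symmetric pairs
    return naive, simplified

for w in (4, 8, 16):
    print(w, square_partial_products(w))     # 8 bits: 64 -> 36
```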
In this paper we propose an interconnection scheme to compute any unfactored arithmetic expression as a network of online modules. This is accomplished by mapping the expression to a doubly-linked hypercube network of online units. The mapping algorithm guarantees a maximum dilation of 2 with unit load, and we conjecture that any arbitrary unfactored expression can be mapped to the proposed architecture with a small delay overhead. The proposed architecture requires no form of reconfiguration to accomplish the mapping, providing an efficient way to compute any network of online operations.
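Dilation is easy to check for any concrete placement: it is the largest Hamming distance between the hypercube labels of nodes joined by a dataflow edge. The Python toy below checks a hypothetical expression and a hand-made placement; it is not the paper's mapping algorithm.

```python
def dilation(edges, place):
    """Max Hamming distance across the images of adjacent graph nodes."""
    ham = lambda u, v: bin(u ^ v).count("1")
    return max(ham(place[a], place[b]) for a, b in edges)

# dataflow graph of (a+b)*(c+d); placement is illustrative and unit-load
edges = [("a", "+1"), ("b", "+1"), ("c", "+2"), ("d", "+2"),
         ("+1", "*"), ("+2", "*")]
place = {"a": 0b000, "b": 0b001, "+1": 0b011,
         "c": 0b100, "d": 0b101, "+2": 0b111, "*": 0b010}
print(dilation(edges, place))   # 2, matching the guaranteed bound
```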
Modular multiplication is the core of most public-key cryptosystems, and its implementation therefore plays a crucial role in the overall efficiency of asymmetric cryptosystems. Hardware approaches provide advantages over software in the framework of efficient dedicated accelerators; the designers' main concerns are the die size, frequency, latency (throughput), and power consumption of those solutions. We show in this paper how Booth recoding, pipelining, Montgomery modular multiplication, and carry-save adders together offer an attractive solution for hardware modular multiplication. Although most of these techniques are individually state of the art, the combination described here is unique and particularly efficient in the context of a constrained hardware design of the XTR cryptosystem. Our solution is implemented on an FPGA platform and compared with previous results: the area-time ratio is improved by roughly a factor of 3.
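For reference, here is the arithmetic core the hardware implements, in plain Python: Montgomery reduction replaces the division in a modular multiply with a mask, a shift, and at most one subtraction. The Booth recoding, pipelining, and carry-save details of the actual design are not modeled, and the modulus and operand values are illustrative.

```python
def mont_mul(a_bar, b_bar, n, R, n_prime):
    """Montgomery product: a_bar * b_bar * R^-1 mod n, division-free."""
    t = a_bar * b_bar
    m = (t * n_prime) % R            # % R is a cheap AND in hardware
    u = (t + m * n) // R             # // R is a cheap shift
    return u - n if u >= n else u

n = 0xC3A5                           # odd modulus, illustrative
R = 1 << 16                          # R = 2^16 > n, gcd(R, n) = 1
n_prime = (-pow(n, -1, R)) % R       # n' = -n^-1 mod R, precomputed once
a, b = 1234, 5678
a_bar, b_bar = a * R % n, b * R % n  # map into the Montgomery domain
ab_bar = mont_mul(a_bar, b_bar, n, R, n_prime)
assert ab_bar * pow(R, -1, n) % n == a * b % n   # map back and check
```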
The solution of large eigensystems has numerous applications in engineering and science, including circuit simulation, mechanical structure stability, and quantum physics. In particular, many optics and photonics applications, such as the design of photonic crystal slab devices, dispersion engineering, and other iterative design techniques, require an eigenvalue solver. Unfortunately, brute-force solutions exhibit a computational complexity of O(n^3), rendering them entirely impractical for medium to large matrices. Although techniques have been developed to reduce this complexity to O(n^2), these algorithms are restricted to special cases such as real, symmetric, or sparse matrices, limiting their applicability. Thus, there is a clear need for a high-performance eigenvalue solver for large, non-Hermitian matrices. To this end, we are developing a novel, hardware-based platform for the analysis of eigenvalue problems. In this paper, we describe this platform and its application to eigenvalue problems, as well as our progress to date.
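The computational kernel such a platform must accelerate is the dense matrix-vector product inside an iterative eigensolver. As a software reference point only (not the platform described above), a power iteration on a non-symmetric matrix looks like this in Python; the nonnegative random matrix guarantees a real, well-separated dominant eigenvalue so the simple iteration converges.

```python
import numpy as np

def power_iteration(A, iters=500):
    """Dominant eigenpair via repeated O(n^2) matrix-vector products."""
    v = np.ones(A.shape[0])
    for _ in range(iters):
        w = A @ v
        v = w / np.linalg.norm(w)
    return v @ A @ v, v               # Rayleigh quotient and eigenvector

A = np.random.default_rng(1).random((200, 200))  # nonnegative, non-symmetric
lam, v = power_iteration(A)
print(lam, np.linalg.norm(A @ v - lam * v))      # residual near zero
```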
Artificial neural networks have been used in applications that require complex procedural algorithms and in systems that lack an analytical mathematical model. By designing a large network of computing nodes based on the artificial neuron model, new solutions can be developed for computational problems in fields such as image processing and speech recognition. Neural networks are inherently parallel, since each neuron, or node, acts as an autonomous computational element. Artificial neural networks use a mathematical model for each node that processes information from other nodes in the same region; this processing entails a weighted-average computation followed by a nonlinear mathematical transformation, for which typical applications use the exponential or trigonometric functions. Various simple artificial neural networks have been implemented using a processor to compute the output of each node sequentially. That approach does not take advantage of the parallelism of a complex artificial neural network. In this work a hardware-based approach to artificial neural network applications is investigated: a field-programmable gate array (FPGA) is used to implement an artificial neuron using hardware multipliers, adders, and CORDIC functional units. Creating a large-scale artificial neural network requires area-efficient hardware units such as CORDIC units. High-performance, low-cost bit-serial CORDIC implementations are presented, along with the FPGA resource usage and performance of the hardware-based artificial neuron.
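As a software illustration of the kind of functional unit involved, here is rotation-mode CORDIC in Python: each iteration uses only shifts and adds, which is what makes bit-serial FPGA implementations so area-efficient. Floating point is used for readability; the iteration count and gain handling are conventional choices, not the paper's design.

```python
import math

def cordic_sin_cos(theta, iters=24):
    """Rotation-mode CORDIC: converges for |theta| < ~1.74 rad."""
    K = 1.0
    for i in range(iters):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))    # cumulative rotation gain
    x, y, z = K, 0.0, theta                      # pre-scale by 1/gain
    for i in range(iters):
        d = 1.0 if z >= 0 else -1.0              # steer residual angle to 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return y, x                                  # (sin, cos)

s, c = cordic_sin_cos(0.6)
print(abs(s - math.sin(0.6)), abs(c - math.cos(0.6)))   # ~1e-7
```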
In [14], we proposed two total variation (TV) minimization wavelet models for the problem of filling in missing or damaged wavelet coefficients due to lossy image transmission or communication. The proposed models provide effective and automatic control over geometric features of the inpainted images, including sharp edges, even in the presence of substantial loss of wavelet coefficients, including in the low frequencies. In this paper, we investigate a modification of the model for noisy images that further improves the recovery properties by using multi-level parameters in the fitting term. New numerical examples illustrate the effectiveness of the recovery.
We investigate the use of a novel multi-lens imaging system in the context of biometric identification, and more
specifically, for iris recognition. Multi-lenslet cameras offer a number of significant advantages over standard
single-lens camera systems, including thin form-factor and wide angle of view. By using appropriate lenslet spacing
relative to the detector pixel pitch, the resulting ensemble of images implicitly contains subject information
at higher spatial frequencies than those present in a single image. Additionally, a multi-lenslet approach enables
the use of observational diversity, including phase, polarization, neutral density, and wavelength diversities. For
example, post-processing multiple observations taken with differing neutral density filters yields an image having
an extended dynamic range. Our research group has developed several multi-lens camera prototypes for the
investigation of such diversities.
In this paper, we present techniques for computing a high-resolution reconstructed image from an ensemble of low-resolution images containing sub-pixel displacements. The quality of a reconstructed image is measured by computing the Hamming distance between the Daugman [4] iris code of a conventional reference iris image and the iris code of the corresponding reconstructed image. We present numerical results concerning the effect of noise and defocus blur on the reconstruction process using simulated data, and report preliminary work on the reconstruction of actual iris data obtained with our camera prototypes.
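The scoring step is compact enough to sketch. Below is a Python version of a Daugman-style fractional Hamming distance between binary iris codes, with occlusion masks so that only bits valid in both codes are compared; the code length and mask convention are illustrative, not those of our system.

```python
import numpy as np

def iris_hamming(code_a, code_b, mask_a, mask_b):
    """Fraction of mutually valid bits on which the two codes disagree."""
    valid = mask_a & mask_b
    return ((code_a ^ code_b) & valid).sum() / valid.sum()

rng = np.random.default_rng(0)
ref = rng.integers(0, 2, 2048, dtype=np.uint8)
flips = (rng.random(2048) < 0.05).astype(np.uint8)
test = ref ^ flips                                # 5% of bits corrupted
mask = np.ones(2048, dtype=np.uint8)
print(iris_hamming(ref, test, mask, mask))        # close to 0.05
```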
Conclusions about the usefulness of mean-squared error for predicting visual image quality are presented in this paper. A standard imaging model was employed, consisting of an object, a point spread function, and noise. Deconvolved reconstructions were recovered from blurred and noisy measurements formed using this model, and the reconstructions were regularized by classical Fourier-domain filters. These post-processing steps generated the basic components of mean-squared error: bias and pixel-by-pixel noise variance. Several Fourier-domain regularization filters were employed so that a broad range of bias/variance tradeoffs could be analyzed. The results show that mean-squared error is a reliable indicator of visual image quality only when the images being compared have approximately equal bias/variance ratios.
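For readers who want the decomposition explicit: writing $\hat{x}_i$ for the reconstruction at pixel $i$ and $x_i$ for the truth, the mean-squared error splits exactly into the two components varied by the regularization filters,

$$\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\mathbb{E}\big[(\hat{x}_i - x_i)^2\big] = \frac{1}{N}\sum_{i=1}^{N}\Big[\big(\mathbb{E}[\hat{x}_i] - x_i\big)^2 + \mathrm{Var}(\hat{x}_i)\Big],$$

where the first term is the squared bias and the second the pixel noise variance. Two reconstructions can therefore share the same MSE while distributing it very differently between bias and variance, which is exactly the regime where MSE stops tracking perceived quality.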
An image-transceiver-based goggle has been under development at Ben-Gurion University and the Holon Institute. The device, aimed at low-vision aid applications [1], is based on a unique LCOS-CMOS Image Transceiver Device (ITD), which combines the functions of imaging and display in a single chip. The head-mounted goggle will allow the capture of ambient scenery, perform the necessary image enhancement and processing, and redirect the result to the healthy part of the patient's retina.
In this presentation we report on the modeling of the imaging, image perception, and discrimination capabilities of the visually impaired. The first part of the study models the spatial frequency response and contrast sensitivity, analyzing the two main cases of central and peripheral vision loss; the effects of both retinal eccentricity and illumination level on the low-vision spatial frequency response are described. The second part of the modeling incorporates an image discrimination model to assess the ability of the visually impaired, as characterized by the low-vision model outlined above, to discriminate between two nearly identical images.
We propose a novel compression scheme to achieve lossy compression of elevation datasets. Our scheme does not use predictors in the traditional sense; instead, our predictors are based on planar segments of the dataset. We believe this is a far better way of expressing context in an elevation dataset, since it can capture continuities in different geometries and allows us to provide an error bound on the output.
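A minimal Python sketch of the idea, assuming a tile-based traversal: fit a least-squares plane over each elevation tile, then code the residual with a uniform quantizer whose step is twice the error tolerance, so the per-sample error bound holds by construction. The tiling and tolerance are stand-ins for the paper's planar segmentation.

```python
import numpy as np

def plane_fit(tile):
    """Least-squares plane over one elevation tile (the predictor)."""
    h, w = tile.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w)])
    coef, *_ = np.linalg.lstsq(A, tile.ravel(), rcond=None)
    return (A @ coef).reshape(h, w)

def code_tile(tile, max_err=2.0):
    """Quantize the plane residual; reconstruction error <= max_err."""
    pred = plane_fit(tile)
    q = np.round((tile - pred) / (2 * max_err))   # symbols to entropy-code
    recon = pred + q * (2 * max_err)
    assert np.abs(recon - tile).max() <= max_err
    return q.astype(int), recon
```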
Detection of man-made or natural objects using hyperspectral sensors has recently attracted great research interest, because such sensors can detect both full-pixel and subpixel objects by analyzing the fine details of the object as well as the background signatures. Several algorithms have been proposed in the literature to detect hyperspectral full-pixel and subpixel objects. The objective of this paper is to develop an automated method to detect hyperspectral objects using the linear mixing model (LMM). Here the background is estimated from the endmember signatures using principal component analysis (PCA) and vertex component analysis (VCA), a fast algorithm for unmixing hyperspectral data. Sensor noise is modeled as a Gaussian random vector with uncorrelated components of equal variance. A detailed theoretical analysis of the proposed subpixel object detection algorithm is provided using the generalized likelihood ratio test (GLRT) and the LMM. For multipixel or resolved objects, detection can exploit both spatial and spectral properties, but detection of subpixel objects can only be achieved by exploiting spectral properties. Since the spectrum of a subpixel object is mixed with that of the background, the resultant pixel contains a combined spectral signature and hence requires some kind of (linear or nonlinear) separation of the constituent elements, called the unmixing process. This paper focuses on algorithms for detection of low-probability objects, both full-pixel and subpixel. The endmembers are evaluated assuming that only the data cube and the object signature are given: to estimate the background subspace, we apply the PCA algorithm to the data cube and then apply the VCA algorithm to estimate the background subspace signatures.
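A numerical sketch of the background-subspace pipeline in Python with numpy: PCA supplies an orthonormal background basis, the background is projected out, and each pixel is scored with a matched-subspace style statistic. This follows the general GLRT/LMM recipe described above rather than the paper's exact estimator, and VCA is not reimplemented here.

```python
import numpy as np

def background_basis(cube, k):
    """Top-k principal directions of the (pixels x bands) data matrix."""
    X = cube.reshape(-1, cube.shape[-1])
    X = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k].T                           # bands x k, orthonormal

def detector_scores(cube, target, B):
    """cos^2 angle between each background-suppressed pixel and the
    background-suppressed target: a common matched-subspace statistic."""
    P = np.eye(B.shape[0]) - B @ B.T          # project out background
    t = P @ target
    t /= np.linalg.norm(t)
    X = cube.reshape(-1, cube.shape[-1]) @ P  # P is symmetric
    num = (X @ t) ** 2
    den = np.sum(X ** 2, axis=1) + 1e-12
    return (num / den).reshape(cube.shape[:2])
```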
Paraxial optical systems implemented entirely with thin lenses and propagation through free space and/or graded-index (GRIN) media are quadratic phase systems (QPS). The effect of any arbitrary QPS on an input wavefield can be described using the linear canonical transform (LCT). In this paper, we examine a novel numerical implementation of the fast linear canonical transform (FLCT) and apply the results to various optical signal processing applications.
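For orientation, the LCT with parameters (a, b; c, d), b != 0, acts on f as F(u) = (i 2 pi b)^{-1/2} * integral of exp(i(a x^2 - 2 x u + d u^2) / 2b) f(x) dx. A brute-force O(N^2) quadrature of this kernel, sketched below in Python, is the kind of reference against which a fast (FLCT) implementation is checked; the grid and parameters are illustrative.

```python
import numpy as np

def lct_reference(f, x, a, b, d):
    """Direct O(N^2) quadrature of the LCT kernel (b != 0).
    Parameters (a, b; c, d) with ad - bc = 1; c is absent from the kernel."""
    K = np.exp(1j * (a * x[None, :] ** 2
                     - 2 * np.outer(x, x)
                     + d * x[:, None] ** 2) / (2 * b))
    return np.sqrt(1 / (2j * np.pi * b)) * (K @ f) * (x[1] - x[0])

x = np.linspace(-8, 8, 512)
f = np.exp(-x ** 2 / 2)                        # Gaussian test input
F = lct_reference(f, x, a=0.0, b=1.0, d=0.0)   # (0,1;-1,0) = Fourier case
print(np.max(np.abs(np.abs(F) - f)))           # magnitude is preserved
```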
We use the characteristic function method to obtain the probability distribution for the random force in the independent harmonic oscillator model. This model encompasses many important physical problems. The general characteristic function method is reviewed for the classical case and formulated for the quantum mechanical case. We obtain the probability distribution at one time and also the joint distribution of force for two different times.
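The backbone of the method, stated for the one-time case: if $M(\theta) = \langle e^{i\theta F}\rangle$ is the characteristic function of the random force $F$, the distribution follows by Fourier inversion,

$$P(F) = \frac{1}{2\pi} \int_{-\infty}^{\infty} M(\theta)\, e^{-i\theta F}\, d\theta,$$

and the two-time joint distribution is obtained in the same way from the two-variable characteristic function $M(\theta_1, \theta_2) = \langle e^{i\theta_1 F(t_1) + i\theta_2 F(t_2)}\rangle$.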
Time-frequency analysis has previously been successfully applied to characterize and quantify a variety of acoustic signals, including marine mammal sounds. In this research, time-frequency analysis is applied to human speech signals in an effort to reveal signal structure salient to the biometric speaker verification challenge. Prior approaches to speaker verification have relied upon signal processing analyses such as linear prediction or weighted cepstrum spectral representations of segments of speech, together with classification techniques based on stochastic pattern matching. The authors believe that classifying the identity of a speaker based on time-frequency representations of short-time events occurring in speech could have substantial advantages. Using these ideas, a speaker verification algorithm was developed [1] and has been refined over the past several years. In this presentation, the authors describe the testing of the algorithm using a large speech database, the results obtained, and recommendations for further improvements.
We argue that the standard definition of signal-to-noise ratio may be misleading when the signal or noise is nonstationary. We introduce a new measure that we call the local signal-to-noise ratio (LSNR), which is well suited to nonstationary situations. The advantage of our measure is that it is a local property, unlike the standard SNR, which is a single number computed over the total duration of the signal. We simulate a number of cases to show that our measure is more indicative of the signal and noise levels in nonstationary situations.
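One plausible reading of such a measure, sketched in Python: estimate signal and noise power in a sliding window and form the ratio at every sample, so a transient's SNR is reported where the transient actually lives. The window length and estimator are assumptions, not the authors' exact definition.

```python
import numpy as np

def lsnr_db(signal, noise, win=256):
    """Local SNR: windowed signal power over windowed noise power, in dB."""
    w = np.ones(win) / win
    ps = np.convolve(signal ** 2, w, mode="same")
    pn = np.convolve(noise ** 2, w, mode="same")
    return 10 * np.log10(ps / (pn + 1e-20))

t = np.linspace(0, 1, 8192)
burst = np.sin(2 * np.pi * 200 * t) * np.exp(-((t - 0.5) / 0.05) ** 2)
noise = 0.1 * np.random.default_rng(0).standard_normal(t.size)
# the global SNR is a single number; lsnr_db(burst, noise) rises and
# falls with the burst, which is the nonstationary behavior of interest
```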
In array processing applications, it is desirable to extract the sources that generate the observed signals. There are various source separation and component extraction algorithms in the literature, including principal component analysis (PCA) and independent component analysis (ICA). However, most of these methods are formulated in the time domain and are not designed to deal with time-varying signals. In this paper, we introduce a new time-frequency-based decomposition method using an information measure as the decomposition criterion. It is shown that, under the assumption that the source signals are disjoint on the time-frequency plane, this method can extract the sources up to a scalar factor. Based on the QR decomposition of the mixing matrix, the source extraction algorithm reduces to finding the optimal N-dimensional rotation of the observed time-frequency distributions. The proposed algorithm is implemented using the steepest descent approach to find the optimal rotation angle. The performance of the method is illustrated for example signals and compared to some well-known decomposition techniques.
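A two-channel toy in Python shows the criterion at work. Two tones that are disjoint on the time-frequency plane are mixed by a rotation; sweeping the de-rotation angle and scoring the entropy of the output spectrograms recovers the unmixing angle, since the disjoint (sparsest) arrangement has the lowest entropy. A grid search stands in for the paper's steepest descent, and the 2-D rotation mixing is an assumption of the example.

```python
import numpy as np
from scipy.signal import stft

def tf_entropy(y):
    """Shannon entropy of a channel's normalized spectrogram."""
    _, _, Z = stft(y, nperseg=128)
    p = np.abs(Z) ** 2
    p /= p.sum()
    return -np.sum(p * np.log(p + 1e-16))

def cost(theta, x1, x2):
    y1 = np.cos(theta) * x1 + np.sin(theta) * x2
    y2 = -np.sin(theta) * x1 + np.cos(theta) * x2
    return tf_entropy(y1) + tf_entropy(y2)

t = np.arange(8192) / 8192.0
s1, s2 = np.sin(2*np.pi*400*t), np.sin(2*np.pi*1200*t)   # TF-disjoint
x1, x2 = 0.8*s1 - 0.6*s2, 0.6*s1 + 0.8*s2                # rotated mixture
thetas = np.linspace(0.0, np.pi/2, 91)
print(min(thetas, key=lambda th: cost(th, x1, x2)))      # ~0.6435 rad
```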
Signals with time-varying spectral content arise in a number of situations, such as in shallow water sound propagation, biomedical signals, machine and structural vibrations, and seismic signals, among others. The Wigner distribution and its generalization have become standard methods for analyzing such time-varying signals. We derive approximations of the Wigner distribution that can be applied to gain insights into the effects of filtering, amplitude modulation,
frequency modulation, and dispersive propagation on the time-varying spectral content of signals.
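A direct numerical Wigner distribution, against which such approximations can be checked, is short enough to include. This Python sketch uses the textbook lag-product-then-FFT form with zero padding at the edges; note that the frequency axis comes out at half the usual FFT bin spacing (a tone at normalized frequency f peaks at bin 2fN). It is a generic reference, not code from the paper.

```python
import numpy as np

def wigner(x):
    """Discrete Wigner distribution: FFT over lag m of x[n+m]*conj(x[n-m])."""
    N = len(x)
    m = np.arange(-(N // 2), N // 2)
    W = np.zeros((N, N))
    for n in range(N):
        r = np.zeros(N, dtype=complex)
        ok = (n + m >= 0) & (n + m < N) & (n - m >= 0) & (n - m < N)
        r[ok] = x[n + m[ok]] * np.conj(x[n - m[ok]])
        W[n] = np.fft.fft(np.fft.ifftshift(r)).real  # lag 0 moved to index 0
    return W

n = np.arange(256)
chirp = np.exp(1j * np.pi * 1e-4 * n ** 2)   # analytic linear chirp
W = wigner(chirp)           # the ridge tracks the instantaneous frequency
```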
We address the problem of efficient resolution, detection, and estimation of weak tones in a potentially massive amount of data. Our goal is to produce a relatively small reduced data set characterizing the signals in the environment in time and frequency. The process must be computationally efficient, provide high gain, resolve closely spaced signals, and compress the signal information into a form that can be easily displayed and further processed. We base our process on the cross-spectral representation we have previously applied to other problems; in selecting this method, we considered other representations and estimation methods, such as the Wigner distribution and Welch's method, and we compare our method to them. The spectral estimation method we propose is a variation of Welch's method and the cross-power spectral (CPS) estimator, which was first applied to signal estimation and detection in the mid-1980s. The CPS algorithm and the method presented here are based on the principles first described by Kodera et al., now frequently called the reassignment principle.
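For readers who want the baseline in hand, Welch's method is a few lines with scipy, and its resolution/variance trade-off is what the CPS/reassignment refinement improves upon. The signal parameters below are arbitrary.

```python
import numpy as np
from scipy.signal import welch

fs = 8192
t = np.arange(4 * fs) / fs
x = (np.sin(2 * np.pi * 1000.0 * t)                  # tone to detect
     + np.random.default_rng(0).standard_normal(t.size))
# averaging many short windowed segments reduces estimator variance at
# the cost of frequency resolution (bin width fs / nperseg = 4 Hz here)
f, Pxx = welch(x, fs=fs, nperseg=2048, noverlap=1024)
print(f[np.argmax(Pxx)])                             # ~1000 Hz
```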
This paper draws on an innovative signal-processing method that jointly analyzes the time and frequency domains and uses that information to characterize deadly arc faults and distinguish them from normal operational events. It introduces a variety of new power quality assessment tools developed with the purpose of both detecting an arc fault faster than has yet been done and distinguishing the arc fault from normal load operations via time-localized spectral characterization. Based on the time and frequency localization of the arc faults, the time-varying impedances of the arc fault are modeled in terms of harmonic sources. Accomplishing these objectives would lead to new, advanced smart arc-fault circuit breakers and to the modeling and simulation of arc fault phenomena.
Renyi entropy, used as a data analysis tool, provides helpful information in many practical applications due to its relevant properties when dealing with time- or space-frequency representations (TFRs or SFRs): it provides a generalized information content (entropy) of a given signal under a local constraint. This paper deals with the problem of discriminating isotrigon patterns against binary random backgrounds, and extends the approach to the multi-texture isotrigon case. Discrimination is performed through a suitable Renyi entropy normalization, extracted from spatially oriented 1-D pseudo-Wigner distributions (PWDs) of the test images. In a preliminary step, the original patterns are replaced by their respective localized, spatially oriented Allan variances; in this way, discrimination can be improved through the anisotropic behavior shown by the Allan variance of the patterns. The method has been empirically evaluated using a family of isotrigon patterns embedded in a binary random background, as well as multiple-isotrigon mosaics, and compared with other existing methods.
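The measure at the center of the method is compact. As a generic Python illustration (without the paper's PWD, Allan-variance preprocessing, or localized normalization), the Renyi entropy of a normalized distribution P is:

```python
import numpy as np

def renyi_entropy(tfd, alpha=3.0):
    """H_alpha = log2(sum(P**alpha)) / (1 - alpha) for a normalized,
    nonnegative time- or space-frequency distribution P."""
    P = np.abs(tfd) / np.abs(tfd).sum()
    return np.log2(np.sum(P ** alpha)) / (1.0 - alpha)

# a flat distribution maximizes the entropy, a concentrated one minimizes it
flat = np.ones((64, 64))
peaky = np.zeros((64, 64)); peaky[32, 32] = 1.0
print(renyi_entropy(flat), renyi_entropy(peaky))   # 12.0 and 0.0
```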
This paper provides a time-frequency approach to modeling and estimating the communication channel, and to symbol transmission, in multi-carrier wireless systems. The method is based on the evolutionary spectral theory of signals and systems. Multi-carrier systems, such as orthogonal frequency division multiplexing (OFDM) and multi-carrier spread spectrum (MCSS), are very efficient in fast-fading channels. However, the basic pulse used in the modulation causes dispersion in time-frequency, introducing inter-symbol and inter-channel interference. Our time-frequency approach deals separately with channel modeling and estimation and with symbol transmission. Using the properties of the channel's response to a linear chirp signal, it is shown that the typical linear time-variant channel model needed to characterize multipath and Doppler simplifies to a linear time-invariant model of minimal order. Time and Doppler shifts are represented by equivalent time shifts and are estimated blindly from the evolutionary kernel of the received signal; the linear chirp thus serves as a pilot sequence to characterize the channel, and a coherent receiver uses this information to detect the transmitted symbols. A multi-user OFDM system is obtained by using a linear chirp as the modulating signal for basic pulses shifted in time, and by choosing the instantaneous frequency of the linear chirp to have unity slope for an optimal time-frequency lattice. This optimal lattice increases the transmission rate, diminishes the inter-symbol and inter-channel interference, and provides a new way of looking at OFDM. The approach is extended to MCSS. To illustrate the concepts, simulations with different signal-to-noise ratios are performed; the results are encouraging and worthy of further investigation.
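The chirp-as-pilot idea can be illustrated numerically: a complex linear chirp has an impulse-like periodic autocorrelation, so matched filtering the received signal against the pilot exposes the multipath delays directly. This Python toy uses a two-path channel and circular correlation; it illustrates the principle, not the evolutionary-kernel estimator of the paper.

```python
import numpy as np

N = 1024
n = np.arange(N)
pilot = np.exp(1j * np.pi * n ** 2 / N)             # linear chirp pilot
rx = np.roll(pilot, 5) + 0.4 * np.roll(pilot, 23)   # toy two-path channel
corr = np.abs(np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(pilot))))
print(sorted(np.argsort(corr)[-2:]))                # [5, 23]: path delays
```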
We present an extension of the classic multiple signal classification (MUSIC) method developed by Schmidt and Bienvenu in 1979. While the classic MUSIC algorithm is limited to the detection of constant-frequency sinusoids in white noise, the proposed method is capable of detecting signals with a continuously varying instantaneous frequency. The method is based on the development of a discrete-time version of the generalized scale transform (GST), introduced by Nickel and Williams in 1999. As a byproduct, we obtain techniques for discrete-time warp-shift invariant filtering, which can be used alongside signal detection to separate signals with different instantaneous frequency contours.
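For contrast with the warped version, the classic MUSIC that the paper extends fits in a few lines of Python: eigendecompose a sample covariance, keep the noise subspace, and scan fixed-frequency steering vectors. The GST-based extension effectively replaces these steering vectors with warped ones; that step is not reproduced here, and all parameters below are illustrative.

```python
import numpy as np

def music(x, p, freqs, m=32):
    """Classic MUSIC pseudospectrum for p complex sinusoids in noise."""
    X = np.array([x[i:i + m] for i in range(len(x) - m)])
    R = X.conj().T @ X / X.shape[0]            # m x m sample covariance
    _, V = np.linalg.eigh(R)                   # eigenvalues ascending
    En = V[:, :m - p]                          # noise subspace
    a = np.exp(2j * np.pi * np.outer(freqs, np.arange(m)))
    return 1.0 / np.linalg.norm(a @ En.conj(), axis=1) ** 2

t = np.arange(512)
x = (np.exp(2j * np.pi * 0.12 * t) + np.exp(2j * np.pi * 0.31 * t)
     + 0.1 * np.random.default_rng(0).standard_normal(512))
S = music(x, p=2, freqs=np.linspace(0, 0.5, 2000))
# S peaks sharply at f = 0.12 and f = 0.31 (normalized frequency)
```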
We use time-frequency distributions to define local stationarity of a random process. We argue that local stationarity is achieved when the Wigner spectrum is approximately factorable. We show that when that is the case the autocorrelation function is the one considered by Silverman in 1957. Other time-frequency representations are also considered.
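The factorization in question can be written out. Silverman's locally stationary processes have an autocorrelation that separates into a function of the center time and a function of the lag,

$$R(t_1, t_2) = R_1\!\left(\frac{t_1 + t_2}{2}\right) R_2(t_1 - t_2),$$

and taking the Fourier transform over the lag variable shows the Wigner spectrum factors correspondingly, $\bar{W}(t, \omega) = R_1(t)\, S_2(\omega)$ with $S_2$ the spectrum of $R_2$. Approximate factorability of $\bar{W}$ is thus the natural local-stationarity criterion used above.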