Blind separation of linearly mixed white Gaussian sources is impossible due to rotational symmetry. For this reason, all blind separation algorithms rest on some assumption about how the situation departs from that insoluble case. Here we discuss the assumption of sparseness and try to place the various algorithms that make it in a common framework. The main objective of this paper is to convey some rough intuitions and to provide suitable hooks into the literature.
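As a minimal numerical illustration of this intuition (a synthetic numpy sketch, not from the paper): projections of a whitened Gaussian mixture look statistically identical at every angle, whereas for sparse (Laplacian) sources the excess kurtosis of the projections varies with angle, so the mixing directions become identifiable.

```python
# Why Gaussian mixtures are unidentifiable but sparse ones are not:
# projections of whitened Gaussian mixtures look the same at every angle;
# for Laplacian sources, kurtosis varies with angle and peaks near the
# un-mixing directions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
A = np.array([[1.0, 0.6], [0.4, 1.0]])            # arbitrary mixing matrix

def whiten(x):
    x = x - x.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(x))
    return E @ np.diag(d ** -0.5) @ E.T @ x

def kurt_vs_angle(x):
    angles = np.linspace(0, np.pi, 7)
    proj = np.cos(angles)[:, None] * x[0] + np.sin(angles)[:, None] * x[1]
    return (proj ** 4).mean(axis=1) - 3            # excess kurtosis per angle

gauss = whiten(A @ rng.normal(size=(2, n)))
sparse = whiten(A @ rng.laplace(size=(2, n)))
print("Gaussian:", np.round(kurt_vs_angle(gauss), 2))   # ~0 at every angle
print("Laplace :", np.round(kurt_vs_angle(sparse), 2))  # varies with angle
```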
A Cauchy Machine has been applied to solve a nonlinear, space-variant blind imaging problem with positivity constraints imposed on a pixel-by-pixel basis. The nonlinearity parameters, de-mixing matrix, and source vector are found at the minimum of the thermodynamic free energy H = U − T0S, where U is the estimation-error energy, T0 is the temperature, and S is the entropy. The free energy represents the dynamic balance of an open information system with constraints defined by the data vector. The solution is found by combining the Lagrange Constraint Neural Network algorithm to compute the unknown source vector, an exhaustive search to find the unknown nonlinearity parameters, and a Cauchy Machine to seek the de-mixing matrix at the global minimum of H for each pixel. We demonstrate the algorithm's ability to recover images from a synthetic, noise-free nonlinear mixture of two images. The capability of the Cauchy Machine to find the global minimum of a golf-hole type of landscape in higher dimensions, at much lower computational cost than an exhaustive search, had hitherto never been demonstrated.
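A hedged sketch of the search idea (all values illustrative; the gentle quadratic funnel and narrow deep well below merely stand in for the free energy H): Cauchy annealing uses heavy-tailed jumps so that occasional long hops can locate a narrow "golf hole" minimum that small Gaussian steps would rarely find.

```python
# Cauchy (fast) annealing on a toy golf-hole landscape in 4-D.
import numpy as np

rng = np.random.default_rng(1)
hole = np.array([2.0, -1.5, 0.5, 3.0])             # hidden global minimum

def H(x):                                          # gentle funnel + narrow hole
    d2 = np.sum((x - hole) ** 2)
    return 0.05 * d2 - 2.0 * np.exp(-20.0 * d2)

x = np.zeros(4); fx = H(x)
T0 = 1.0
for k in range(1, 200_001):
    T = T0 / k                                     # fast-annealing schedule ~ T0/k
    step = max(T, 0.02)                            # keep a minimum jump scale
    cand = x + step * rng.standard_cauchy(4)       # heavy-tailed Cauchy proposal
    fc = H(cand)
    if fc < fx or rng.random() < np.exp((fx - fc) / max(T, 1e-12)):
        x, fx = cand, fc
print("found", np.round(x, 2), "H =", round(fx, 3))  # typically lands near the hole
```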
The Lagrangian Artificial Neural Network (LANN) has recently been proposed for hyperspectral image classification. It is an unsupervised technique that can simultaneously estimate the endmembers and their abundance fractions without any prior information. Since the implementation of the LANN is completely unsupervised, the number of estimated abundance fraction images (AFIs), which display the distribution of the corresponding endmember materials in the image scene, equals the number of bands. We find that many AFIs are highly correlated and visually similar. To facilitate subsequent data assessment, a two-stage post-processing approach is proposed. First, the number of endmembers ns resident in the image scene is estimated using a Neyman-Pearson hypothesis-testing-based eigen-thresholding method. Next, an automatic search algorithm is applied to find the most distinct AFIs using divergence as the criterion, with the threshold adjusted until the number of selected AFIs equals the ns estimated in the first stage. Experimental results using AVIRIS data show the efficiency of the proposed post-processing technique in distinct AFI selection.
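A hedged sketch of the second stage (a greedy variant of the threshold search, with toy AFIs; the divergence here is a symmetric KL divergence between normalized image histograms, and ns is simply set by hand rather than coming from the eigen-thresholding stage):

```python
# Greedily pick the n_s mutually most distinct abundance-fraction images.
import numpy as np

def sym_kl(p, q, eps=1e-12):
    p = p + eps; q = q + eps
    return np.sum(p * np.log(p / q) + q * np.log(q / p))

def select_afis(afis, n_s, bins=64):
    hists = [np.histogram(a, bins=bins, range=(0, 1), density=True)[0]
             for a in afis]
    hists = [h / h.sum() for h in hists]
    chosen = [0]                                    # seed with the first AFI
    while len(chosen) < n_s:
        # pick the AFI with the largest minimum divergence to those chosen
        best = max((i for i in range(len(afis)) if i not in chosen),
                   key=lambda i: min(sym_kl(hists[i], hists[j]) for j in chosen))
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
afis = [rng.beta(a, b, size=(64, 64))               # toy AFIs, values in (0, 1)
        for a, b in [(2, 5), (2, 5.1), (5, 2), (1, 1), (0.5, 0.5)]]
print(select_afis(afis, n_s=3))
```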
Observed and simulated global temperature series include the effects of many different sources, such as volcanic eruptions and El Niño Southern Oscillation (ENSO) variations. In order to compare the results of different models to each other, and to the observed data, it is necessary to first remove contributions from sources that are not commonly shared across the models considered. Such a separation of sources is also desirable for assessing the effect of human contributions on the global climate. Atmospheric scientists currently use parametric models and iterative techniques to remove the effects of volcanic eruptions and ENSO variations from global temperature trends. Drawbacks of the parametric approach include the non-robustness of the results to the estimated parameter values and the possible lack of fit of the data to the model. In this paper, we investigate ICA as an alternative method for separating independent sources in global temperature series. Instead of fitting parametric models, we let the data guide the estimation and separate the effects of the underlying sources automatically. We first assess ICA on simple artificial datasets to establish the conditions under which it is feasible in our context, then study its results on climate data from the National Centers for Environmental Prediction.
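A minimal sketch of the model-free idea on synthetic data (the trend, ENSO-like oscillation, and volcano-like spikes below are caricatures, not climate data): FastICA un-mixes the sources from several observed series without any parametric model.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 50, 600)                        # "years"
trend = 0.02 * t
enso = np.sin(2 * np.pi * t / 4.3)                 # ~4-year oscillation
volcano = np.zeros_like(t); volcano[[120, 400]] = -3.0
S = np.c_[trend, enso, volcano]
A = rng.uniform(0.5, 1.5, size=(5, 3))             # 5 observed series
X = S @ A.T + 0.05 * rng.normal(size=(len(t), 5))

ica = FastICA(n_components=3, random_state=0)
S_hat = ica.fit_transform(X)                       # estimated sources
# cross-correlation of true vs estimated sources: strong matches appear,
# up to the usual ICA order and sign ambiguity
print(np.round(np.corrcoef(S.T, S_hat.T)[:3, 3:], 2))
```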
We used a higher-order correlation-based method for signal denoising. In our approach, we determined which wavelet coefficients contained mostly noise, or mostly signal, based on higher-order statistics. Because the higher-than-second-order cumulants of the Gaussian probability density function are zero, the third-order correlation coefficient receives no statistical contribution from Gaussian noise. We obtained results for both 1-D signals and images. In all cases, our approach showed improved results compared to a more popular denoising method.
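A hedged sketch of the idea (window size, threshold, and test signal are illustrative, not the paper's values): since third-order cumulants of Gaussian noise vanish, keep wavelet coefficients whose local third-order moment is significantly non-zero and discard the rest.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
clean = np.exp(-((t - 0.3) / 0.02) ** 2) - 0.7 * np.exp(-((t - 0.7) / 0.01) ** 2)
noisy = clean + 0.1 * rng.normal(size=t.size)

coeffs = pywt.wavedec(noisy, "db4", level=5)
den = [coeffs[0]]                                   # keep the approximation
for d in coeffs[1:]:
    w = 8                                           # local window half-width
    pad = np.pad(d, w, mode="reflect")
    m3 = np.array([np.mean(pad[i:i + 2 * w + 1] ** 3)
                   for i in range(d.size)])
    sigma = d.std() or 1.0
    keep = np.abs(m3) > 2.0 * sigma ** 3            # third-order significance
    den.append(d * keep)
rec = pywt.waverec(den, "db4")[:t.size]
print("noisy MSE   :", round(np.mean((noisy - clean) ** 2), 5))
print("denoised MSE:", round(np.mean((rec - clean) ** 2), 5))
```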
This paper concerns the possibilities that side-scan sonar offers for determining bathymetry. New side-scan sonars, which are able to image the sea bottom at high definition, can estimate the relief with the same definition as conventional sonar images by using an interferometric multi-sensor system. The drawbacks concern the accuracy of, and errors in, the resulting digital elevation model. Interferometric methods use a phase difference to determine a time delay between two sensors. The phase difference lies in a finite interval (−π, +π), but the time delay between the two sensors does not: the phase is ambiguous modulo 2π. The sonar used here is designed for the vernier technique, which allows this ambiguity to be removed. The difficulty comes from interferometric noise, which causes errors in the 2π-ambiguity estimate derived from the vernier. The traditional way to reduce the impact of noise on the interferometric signal is to average the data, but this does not preserve the resolution of the bathymetric estimate. This paper presents an attempt to improve the accuracy and resolution of the interferometric signal through a wavelet-based despeckling method. Traditionally, despeckling is performed on the logarithm of the absolute value of the signal; for this application, however, the proposed despeckling is applied directly to the interferometric signal by integrating information guided by the despeckled image. This multiscale analysis corresponds to an auto-adaptive averaging filter. A variant of the method, based on the same assumption but using the identity function to reconstruct the signal, is also introduced. In the results presented, phase despeckling considerably improves the quality of the interferometric signal in terms of signal-to-noise ratio, without significant degradation of resolution.
Detecting man-made objects concealed in dense foliage is an ongoing challenge in military imaging: objects must be detected, recognized, and classified against the background. Here, wavelets are combined with spectral derivatives to detect the 'red edge' and thereby distinguish foliage from camouflaged man-made objects.
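A minimal sketch of the spectral-derivative half of the idea (synthetic spectra; the sigmoid and flat paint curve are toy stand-ins): vegetation shows a sharp reflectance rise near 700 nm, so its first spectral derivative peaks there, while painted man-made surfaces stay comparatively flat.

```python
import numpy as np

wl = np.arange(400, 901, 5).astype(float)           # wavelength, nm
foliage = 0.05 + 0.45 / (1 + np.exp(-(wl - 715) / 15))   # sigmoid red edge
paint = 0.20 + 0.0001 * (wl - 400)                  # flat green-paint stand-in

for name, refl in [("foliage", foliage), ("paint", paint)]:
    d = np.gradient(refl, wl)                       # spectral derivative
    i = np.argmax(d)
    print(f"{name}: max dR/dlambda = {d[i]:.4f} at {wl[i]:.0f} nm")
```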
This research defined the underpinning concepts of a system that is highly secure, yet efficient and non-invasive enough for everyday use. The live biometric authenticity check, which augments invariant fingerprints with variable live features, offers superior security by combining the user's physical characteristics with a passcode (numerical PIN) or passphrase (a string of words), and could easily be augmented with other biometric video imaging devices for the utmost security.
Over the last decade, many criminal justice agencies have replaced their fingerprint-card-based systems with electronic processing. We examine these new systems and find that image acquisition in support of the identification application is consistently a challenge. Image capture and compression are widely dispersed and relatively new technologies within criminal justice information systems, and image quality assurance programs are only beginning to mature.
Flexible systems are used in many industrial designs to reduce weight and power consumption. Undesirable frequencies are common and may interfere with control systems. In many aerospace flexible dynamic systems, the interfering frequency shifts due to nonlinearities and coupling within the system. The conventional approach in aerospace is to generate a large number of individual notch filters to protect the control systems; this requires a significant verification and validation effort, as well as a large storage capacity for the filter coefficients. In this paper, a Minimal Resource Allocating Network (MRAN) neural system is used to control a multivariable linearized space structure. Growth and pruning ideas are reviewed and applied to the space-structure model, and proportional-integral (PI) and lead-lag update rules are compared to a typical update rule.
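A loose sketch of the MRAN-style growth rule (thresholds and the toy target are illustrative; pruning is omitted, and the simple weight nudge is a crude stand-in for MRAN's extended-Kalman-filter update): a new radial-basis unit is allocated only when the prediction error and the distance to the nearest existing center both exceed thresholds.

```python
import numpy as np

centers, widths, weights = [], [], []

def predict(x):
    if not centers:
        return 0.0
    phi = np.array([np.exp(-np.sum((x - c) ** 2) / (2 * s ** 2))
                    for c, s in zip(centers, widths)])
    return float(np.dot(weights, phi))

def observe(x, y, e_min=0.05, d_min=0.3, kappa=0.8, lr=0.1):
    e = y - predict(x)
    d = min((np.linalg.norm(x - c) for c in centers), default=np.inf)
    if abs(e) > e_min and d > d_min:                 # grow: novel and poorly fit
        centers.append(x.copy())
        widths.append(kappa * (d if np.isfinite(d) else d_min))
        weights.append(e)
    elif centers:                                    # otherwise adapt nearest unit
        i = int(np.argmin([np.linalg.norm(x - c) for c in centers]))
        weights[i] += lr * e

rng = np.random.default_rng(0)
for _ in range(2000):
    x = rng.uniform(-1, 1, size=2)
    observe(x, float(np.sin(3 * x[0]) * x[1]))       # toy plant response
print("units allocated:", len(centers))
```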
Basic aerodynamic coefficients are modeled as functions of angle of attack, speed-brake deflection angle, Mach number, and sideslip angle. Most of the aerodynamic parameters can be fitted well by polynomial functions. We previously demonstrated that a neural network is a fast, reliable way of predicting aerodynamic coefficients, but we encountered a few underfitted and/or overfitted results during prediction. The training data for the neural network are derived from wind-tunnel test measurements and numerical simulations. The basic questions that arise are: how many training data points are required to produce an efficient neural network prediction, and which transfer functions should be used between the input and hidden layers and between the hidden and output layers? In this paper, a comparative study of the efficiency of neural network prediction with different transfer functions and training dataset sizes is presented. The results reflect the sensitivity of the prediction to the architecture, the transfer functions, and the training dataset size.
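A hedged sketch of such a comparison (a synthetic polynomial coefficient surface; sklearn's tanh and relu activations and the dataset sizes stand in for the paper's transfer functions and data):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def coeff(alpha, mach):                             # toy C_L(alpha, Mach)
    return 0.1 + 2.0 * alpha - 3.0 * alpha ** 2 + 0.3 * mach * alpha

X_test = rng.uniform([0, 0.2], [0.4, 0.9], size=(500, 2))
y_test = coeff(X_test[:, 0], X_test[:, 1])

for n in (50, 200, 1000):                           # training-set sizes
    X = rng.uniform([0, 0.2], [0.4, 0.9], size=(n, 2))
    y = coeff(X[:, 0], X[:, 1])
    for act in ("tanh", "relu"):                    # transfer functions
        net = MLPRegressor((20,), activation=act, max_iter=5000,
                           random_state=0).fit(X, y)
        err = np.mean((net.predict(X_test) - y_test) ** 2)
        print(f"n={n:4d} {act:5s} test MSE {err:.2e}")
```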
In this paper, a geometrical Fuzzy ART (G-Fuzzy ART) neural network architecture is presented. While the original Fuzzy ART requires preprocessing of the input patterns (complement coding), the G-Fuzzy ART accepts input patterns without complement coding. The weights of the G-Fuzzy ART refer directly to the borders of the hyper-rectangle, whereas the weights in the Fuzzy ART refer to its endpoints, so the size of the hyper-rectangle is given directly by the size of the weight. The geometrical choice function (the Hamming distance from the input pattern to the hyper-rectangle) and the weight-update formulas for the G-Fuzzy ART are presented. The G-Fuzzy ART retains the notion of resonance by applying the vigilance criterion directly to the new size of the hyper-rectangle, and it also retains the min-max fuzzy operators.
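A hedged sketch of the geometric operations described above (the vigilance form mirrors standard Fuzzy ART; category search order is simplified): each category is a hyper-rectangle stored by its borders (lo, hi), the choice function is the city-block distance from the pattern to the rectangle, and vigilance is tested on the size the rectangle would have after expansion.

```python
import numpy as np

M, rho = 2, 0.75                                    # input dimension, vigilance
cats = []                                           # list of (lo, hi) borders

def dist(x, lo, hi):                                # L1 distance to rectangle
    return np.sum(np.maximum(lo - x, 0) + np.maximum(x - hi, 0))

def train(x):
    order = sorted(range(len(cats)), key=lambda j: dist(x, *cats[j]))
    for j in order:
        lo, hi = cats[j]
        nlo, nhi = np.minimum(lo, x), np.maximum(hi, x)   # candidate expansion
        if np.sum(nhi - nlo) <= (1 - rho) * M:      # vigilance on the new size
            cats[j] = (nlo, nhi)                    # resonance: expand borders
            return j
    cats.append((x.copy(), x.copy()))               # else commit a new category
    return len(cats) - 1

rng = np.random.default_rng(0)
for p in rng.random((100, M)):
    train(p)
print("categories:", len(cats))
```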
We investigate the pre-processing of sonar signals prior to using neural networks for robust differentiation of commonly encountered features in indoor environments. Amplitude and time-of-flight measurement patterns acquired from a real sonar system are pre-processed using various techniques, including wavelet transforms, Fourier and fractional Fourier transforms, and Kohonen's self-organizing feature map. Modular and non-modular neural network structures trained with the back-propagation and generating-shrinking algorithms are used to incorporate learning in the identification of parameter relations for target primitives. Networks trained with the generating-shrinking algorithm demonstrate better generalization and interpolation capability and a faster convergence rate. Neural networks trained with the back-propagation algorithm, usually with fractional Fourier transform or wavelet pre-processing, achieve near-perfect differentiation, around 85% correct range estimation, and around 95% correct azimuth estimation, which would be satisfactory in a wide range of applications. Employing only a single sensor node, neural networks can differentiate more targets, with a higher correct differentiation percentage, than previously reported methods employing multiple sensor nodes. The success of the neural network approach shows that sonar signals do contain sufficient information to differentiate a considerable number of target types, but that the previously reported methods are unable to resolve this identifying information. This work can find application wherever recognition of patterns hidden in sonar signals is required; examples include system control based on acoustic signal detection and identification, map building, navigation, obstacle avoidance, and target tracking for mobile robots and other intelligent systems.
The match between the physics of MEG and the assumptions of the most well-developed blind source separation (BSS) algorithms (an unknown instantaneous linear mixing process, many sensors relative to the number of expected recoverable sources, and the large-data limit) has tempted researchers to apply these algorithms to MEG data. We review some of these efforts, with particular emphasis on our own work.
Many accidents are associated with a driver's or machine operator's alertness level. Drowsiness often develops as a result of repetitive or monotonous tasks uninterrupted by external stimuli. To enhance safety, it would be highly desirable to monitor an individual's level of attention. In this work, changes in the power spectrum of the electroencephalographic (EEG) signal are associated with the subject's level of attention. This study reports on initial research carried out to answer the following questions: (i) Does a trend exist in the shape of the power spectrum that indicates the subject's alertness state (drowsy, relaxed, or alert)? (ii) Which points on the cortex are most suitable for detecting drowsiness and/or high alertness? (iii) Which parameters of the power spectrum are most suitable for establishing a workable alertness classification in human subjects? We answer these questions and combine power-spectrum estimation with artificial neural network techniques to create a non-invasive, real-time system able to classify EEG into three levels of attention: alert, relaxed, and drowsy. The classification is made every 10 seconds or more, a user-set time span suitable for raising an alarm if the individual's level of alertness is insufficient. The system was tested on twenty subjects. The alert and relaxed attention levels were measured at randomized hours of the day, and the drowsy level was measured in the morning after one night of sleep deprivation.
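A hedged sketch of the pipeline (caricatured single-tone epochs instead of real EEG; bands, epoch length, and network size are illustrative): Welch power spectra of 10 s epochs are reduced to band powers and fed to a small neural network that labels the epoch drowsy, relaxed, or alert.

```python
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

fs = 128
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch):
    f, p = welch(epoch, fs=fs, nperseg=fs * 2)
    return [p[(f >= lo) & (f < hi)].sum() for lo, hi in bands.values()]

def fake_epoch(state, rng):                         # toy stand-in for real EEG
    t = np.arange(10 * fs) / fs
    f0 = {"drowsy": 5, "relaxed": 10, "alert": 20}[state]
    return np.sin(2 * np.pi * f0 * t) + 0.5 * rng.normal(size=t.size)

rng = np.random.default_rng(0)
X, y = [], []
for s in ("drowsy", "relaxed", "alert"):
    for _ in range(60):
        X.append(band_powers(fake_epoch(s, rng))); y.append(s)
clf = MLPClassifier((10,), max_iter=2000, random_state=0).fit(X[::2], y[::2])
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```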
Alpha entrainment caused by exposure to a background stimulus flickering continuously at 8 1/3 Hz was affected by the appearance of a foreground target stimulus, in response to which the subjects were requested to press a button. Using bipolar derivations (to reduce volume-conduction effects), scalp-recorded EEG potentials were subjected to a continuous wavelet transform using complex Morlet wavelets at a range of scales. Complex Morlet wavelets were used to calculate instantaneous amplitudes and phases efficiently on a per-trial basis, rather than applying the Hilbert transform to band-pass-filtered data. Multiple scales were employed to contrast the pattern of alpha activity with those in other bands, and to determine whether the harmonics observed in the spectral analysis of the data were simply a result of the non-sinusoidal response to the entraining signal or a distinct neural phenomenon. We were thus able to calculate desynchronization/resynchronization for both the entrained and non-entrained alpha activity. The occurrence of the target stimulus caused a sharp increase in amplitude in both the entrained and non-entrained alpha activity, followed by a sharp decrease and then a return to baseline over a period of 2.5 seconds. The entrained alpha activity, however, showed a much more rapid recovery than the non-entrained activity.
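A minimal sketch of the amplitude/phase extraction (a toy amplitude-modulated trace; the wavelet bandwidth and sampling rate are illustrative): a complex Morlet CWT at the entrainment frequency yields per-sample instantaneous amplitude and phase without Hilbert-transforming band-passed data.

```python
import numpy as np
import pywt

fs = 250.0
t = np.arange(0, 4, 1 / fs)
# toy trace: 8 1/3 Hz entrained activity with slow amplitude modulation
eeg = (1 + 0.5 * np.sin(2 * np.pi * 0.5 * t)) * np.sin(2 * np.pi * (25 / 3) * t)

wav = "cmor1.5-1.0"                                 # complex Morlet wavelet
f_target = 25 / 3                                   # entrainment frequency, Hz
scale = pywt.central_frequency(wav) * fs / f_target
coef, _ = pywt.cwt(eeg, [scale], wav, sampling_period=1 / fs)
amp, phase = np.abs(coef[0]), np.angle(coef[0])     # per-sample amplitude/phase
print("amplitude min/max:", round(amp.min(), 2), round(amp.max(), 2))
```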
This paper presents new model-free fMRI methods based on independent component analysis (ICA). Commonly used methods for analyzing fMRI data, such as Student's t-test and cross-correlation analysis, are model-based approaches. Although these methods are easy to implement and effective for data with simple paradigms, they are not applicable when the patterns of neural response are complicated and the fMRI response is unknown. In this paper we evaluate and compare three neural algorithms for estimating spatial ICA on fMRI data: the Infomax approach, the FastICA approach, and a new topographic ICA approach. These methods are compared with principal component analysis and cross-correlation analysis in a systematic fMRI study determining the spatial and temporal extent of task-related activation. Both topographic ICA and FastICA outperform principal component analysis, the Infomax neural network, and standard correlation analysis when applied to fMRI studies. The applicability of the new algorithms is demonstrated on experimental data.
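A minimal sketch of spatial ICA in this setting (synthetic data; FastICA stands in for any of the three estimators): the (time x voxels) data matrix is transposed so that the independent components are spatial maps and the mixing columns are their time courses.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_t, n_v = 120, 2000
maps = np.zeros((3, n_v))                           # three spatial "networks"
maps[0, :200], maps[1, 500:800], maps[2, 1500:1600] = 1.0, 1.0, 1.0
tcs = np.c_[np.sin(np.linspace(0, 12, n_t)),        # task-like time courses
            rng.normal(size=n_t),
            np.cos(np.linspace(0, 5, n_t))]
X = tcs @ maps + 0.2 * rng.normal(size=(n_t, n_v))  # time x voxels

ica = FastICA(n_components=3, random_state=0)
maps_hat = ica.fit_transform(X.T).T                 # spatial maps (components)
tcs_hat = ica.mixing_                               # associated time courses
print(maps_hat.shape, tcs_hat.shape)                # (3, 2000) and (120, 3)
```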
Accurately recognizing speech is a difficult task. Differences in gender, accent, pace, and tone, as well as defects in the recording equipment and environmental noise, can disturb a voice signal. Speech recognition systems are commonly studied and implemented by companies trying to alleviate problems such as illness or injury, or to increase overall efficiency. This research uses wavelet analysis together with several traditional methods to study similarities among sound signals. Through a series of seven steps, a similarity analysis of voice signals from the same speaker, as well as from different speakers, is performed. The efficiency of four different wavelets (Haar, db2, db4, and discrete Morlet), of correlation methods developed previously or in this research, and of two dynamic time warping methods is studied. Several experiments show that these techniques produce excellent results for signals from the same speaker. Based on the limited number of cases studied, some evidence is presented suggesting that the proposed methods are more effective at recognizing male voices than female voices.
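One of the compared ingredients, classic dynamic time warping, in a minimal sketch (scalar feature sequences stand in for frame-wise wavelet features of real utterances): the warped distance stays small for the same "word" spoken at a different pace.

```python
import numpy as np

def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

x = np.sin(np.linspace(0, 3 * np.pi, 80))
y = np.sin(np.linspace(0, 3 * np.pi, 60))           # same "word", faster pace
z = np.cos(np.linspace(0, 5 * np.pi, 80))           # different "word"
print("same word :", round(dtw(x, y), 2))
print("different :", round(dtw(x, z), 2))
```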
Moving targets in SAR images cause phase modulation of the phase history corresponding to the target. Depending on the motion, such modulations cause SAR image distortions. A simple case is an oscillating point reflector, which causes sinusoidal phase modulation. Such signals can be analyzed with time-frequency techniques, which reveal the time-dependent Doppler frequency corresponding to the target. Using both direct instantaneous-frequency estimates and quadratic time-frequency methods, we may estimate the oscillation parameters. Examples are given from simulated and real data; in the case of real data, the results agree well with ground truth. The direct estimates are best for high signal-to-noise ratios and single reflectors; in other cases, the more sophisticated and computationally intensive quadratic methods perform best.
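A hedged sketch of the direct-estimate branch on synthetic data (all parameters illustrative): for a sinusoidally phase-modulated return, the instantaneous frequency, obtained as the unwrapped phase derivative, is itself a sinusoid whose frequency and amplitude yield the oscillation rate and depth.

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
f_osc, a_mod = 3.0, 8.0                             # oscillation rate, depth (rad)
sig = np.exp(1j * (a_mod * np.sin(2 * np.pi * f_osc * t)))
sig = sig + 0.05 * (np.random.default_rng(0)
                    .normal(size=t.size * 2).view(np.complex128))

inst_f = np.diff(np.unwrap(np.angle(sig))) * fs / (2 * np.pi)
spec = np.abs(np.fft.rfft(inst_f - inst_f.mean()))
f_axis = np.fft.rfftfreq(inst_f.size, 1 / fs)
f_hat = f_axis[np.argmax(spec)]                     # oscillation rate
a_hat = inst_f.std() * np.sqrt(2) / f_hat           # IF amplitude = a_mod * f_osc
print(f"estimated rate {f_hat:.2f} Hz, depth {a_hat:.2f} rad")
```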
The use of ISAR imagery for automatic target recognition is seriously hampered by the difficulty of target motion compensation: phase perturbations that result from target maneuvers during the processing interval need to be corrected for. In a previous paper, we demonstrated the use of the local Radon transform for estimating the radial velocity of a target; this estimate can then be used to align a sequence of range profiles prior to cross-range compression. In this paper, we make a quantitative comparison of the results obtained using different types of local Radon transforms. In the second part of the paper, we outline an algorithm for compensating phase perturbations caused by non-uniform target rotation. The algorithm has been tested on simulated data.
Gross distortion in ISAR images due to small perturbed target motion is experimentally verified. Quantitative results obtained under controlled experimental conditions on the target's motion confirm that large distortion in the ISAR image of the target can occur. The distortion is a consequence of the phase-modulation effect in the radar return signal resulting from small, fluctuating perturbed motion of the target, which distorts the image in the cross-range direction. The distortion can be refocused by applying time-frequency analysis; the short-time Fourier transform is used in this paper to examine some of the issues in refocusing distorted ISAR images.
The coherence of successive dwells from an Over-the-Horizon (OTH) High-Frequency (HF) radar (3-30 MHz) is investigated using a technique based on the spectrogram. Although the data are coherent within a dwell, it is not known if coherence is preserved from one dwell to the next, due to possible limitations of the signal processor. (Incoherence imposed by the propagating medium is not considered here.) Land clutter consisting of a sufficiently clean complex sinusoid should reveal the coherence of data across dwells. Unfortunately, the ionosphere imposes multipath and distortion such that it is difficult to obtain sufficiently clean dwells, but a few cases indicate a linear spectral phase offset on the second dwell consistent with a virtual shift of the time origin that can, in principle, be easily compensated for.
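A hedged numerical sketch of the diagnostic (synthetic two-tone "clutter"; all values illustrative): a shift of the time origin between dwells appears as a spectral phase that grows linearly with frequency, and a weighted fit of the cross-spectrum phase slope recovers, and could then compensate, the shift.

```python
import numpy as np

fs, n = 100.0, 512
t = np.arange(n) / fs
shift = 0.004                                       # s: virtual time-origin shift
tones = [7.8, 23.4]                                 # two clean "clutter" lines
d1 = sum(np.exp(2j * np.pi * f0 * t) for f0 in tones)
d2 = sum(np.exp(2j * np.pi * f0 * (t + shift)) for f0 in tones)

cross = np.fft.fft(d2) * np.conj(np.fft.fft(d1))    # cross-spectrum of dwells
f = np.fft.fftfreq(n, 1 / fs)
w = np.abs(cross)                                   # weight by spectral energy
slope = np.sum(w * f * np.angle(cross)) / np.sum(w * f ** 2)
print(f"estimated shift: {slope / (2 * np.pi) * 1e3:.2f} ms (true {shift * 1e3} ms)")
```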
The Doppler effect is a widely treated phenomenon in both radar and sonar for objects undergoing uniform motion; there are many different models one can use to derive it (Censor has written a history of the subject). The treatment of non-uniform motion, however, is not widely discussed in the radar and sonar literature. Some authors argue it is negligible, while others refer to work dating back to Kelly in the early sixties. Kelly's treatment, based on waveform analysis in acoustics, is difficult to justify in electromagnetism: in the language of waveform analysis it is hard to determine when approximations are justified by the physics of the waveform interaction and when they are not. By returning to electromagnetic considerations in the derivation and subsequent analysis, issues of correct physics and proper approximation become transparent. We present a straightforward analysis of the non-uniform Doppler effect based on a relativistic mirror (moving boundary) undergoing arbitrary motion. The resulting structure of the scattered waveform provides a simple representation of the effect of non-uniform motion that can be analyzed more easily. This work is a continuation of earlier work by Censor, De Smedt, and Cooper. The analysis is independent of narrow-band assumptions, so it is completely general. Non-uniform motion can produce two types of effects in the Doppler spectrum: a baseband line that is not straight, and micro-Doppler off the baseband that produces complicated sideband behavior. Both are illustrated using a particular waveform, the continuous wave (CW), analyzed for a number of examples of interest to the radar community. Applications of this information are then discussed.
In this paper, we review the micro-Doppler effect induced by micro-motion dynamics and provide the mathematics of the micro-Doppler effect by introducing micro-motions into the conventional Doppler analysis. The micro-Doppler effect was originally introduced in laser systems, but it can also be observed in microwave radar systems. We present some interesting results on observing micro-Doppler phenomena with microwave radar and discuss potential applications to target feature extraction.
The traditional method of extracting a target contour from an aerial image is to convert the image into a gray-level image with multiple thresholds, or a binary image with a single threshold, and to extract the contour from the target edge according to the change in value. This works only when the contrast between target and background is adequate. Snakes are curves defined within an image domain that move under the influence of internal forces, coming from within the curve itself, and external forces, defined so that the snake conforms to an object boundary or other desired image features. Snakes have proved effective and are widely used in image processing and computer vision: they synthesize parametric curves within an image domain and allow them to move toward desired edges. Particular advantages of the gradient vector flow (GVF) snake over a traditional snake are its insensitivity to initialization and its ability to move into boundary concavities: its initialization can be inside, outside, or across the object's boundary, and it needs no prior knowledge of whether to shrink or expand toward the boundary. This increased capture range is achieved through a diffusion process that does not blur the edges themselves.
Because the brightness of an aerial target's surface varies greatly and in a complicated way with the angle of incident light, the GVF snake alone is not always fast, accurate, and effective for such images. A new contour-extraction method, GVF snakes combined with wavelet multi-resolution analysis, is therefore proposed. In this algorithm, a bubble wavelet is applied iteratively, in order of decreasing scale, to perform the multi-resolution analysis before the GVF snake is used at each step to extract an accurate target contour. After the accurate contour is extracted, polygon approximation is used to extract the characteristics for recognizing the aerial target. The process is as follows:
Step 1: Use the bubble wavelet filter to remove most of the noise, weakening false edges.
Step 2: Initialize the active contour and move it according to the GVF to obtain a new contour.
Step 3: Decrease the filter scale, use the new contour as the initial contour, and move it again to obtain the next contour.
Step 4: Repeat Step 3 until the set scale is reached; the last contour is the final contour.
Step 5: Find the center and determine an axis by calculating the distance from every point on the final contour to the center.
Step 6: Adjust the distance threshold and merge points until the contour becomes a polygon with a fixed number of vertices that best fits the target-recognition requirement.
Step 7: Match the polygon against the target template to recognize the target.
Applying the new algorithm to aerial images of a helicopter and an F-22 fighter, the contour-extraction and polygon-approximation results show that the targets can be matched and recognized successfully. This paper focuses mainly on contour extraction and polygon approximation within the recognition chain.
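A hedged sketch of the GVF force-field computation that drives the snake in Step 2 (parameters illustrative; the snake evolution and bubble-wavelet stages are omitted): the edge-map gradient is diffused so the force field reaches into concavities and far from the boundary, via u_t = mu * Laplacian(u) - (u - f_x)(f_x^2 + f_y^2), and likewise for v.

```python
import numpy as np

def gvf(edge_map, mu=0.2, iters=200):
    fy, fx = np.gradient(edge_map)                  # edge-map gradient
    u, v = fx.copy(), fy.copy()
    mag2 = fx ** 2 + fy ** 2
    for _ in range(iters):
        lap_u = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        lap_v = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                 np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v)
        u += mu * lap_u - (u - fx) * mag2           # diffuse, anchored at edges
        v += mu * lap_v - (v - fy) * mag2
    return u, v

img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0   # toy target silhouette
edge = np.hypot(*np.gradient(img))                  # simple edge map
u, v = gvf(edge)
print("force magnitude far from the edge:", round(np.hypot(u, v)[5, 5], 4))
```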
We show that the fuzzy membership function (FMF) is learnable with underlying chaotic neural networks for the open-set probability. A sigmoid N-shaped function is used to generate chaotic signals. We postulate that such a chaotic set of innumerable realizations forms an FMF, exemplified by fuzzy feature maps of eyes, nose, etc., for invariant face classification. The chaotic neural network (CNN) with FMF plays an important role in fast pattern recognition, in examples of both habituation and novelty detection. To reduce the computational complexity, a nearest-neighborhood weight connection is proposed. In addition, a novel timing-sequence weight-learning algorithm is introduced to increase the capacity and recall of the associative memory. For simplicity, a piecewise-linear (PWL) N-shaped function was designed, implemented, and fabricated in a CMOS chip.
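A loose sketch of one ingredient, assuming a PWL N-shaped map on [0, 1] (the map below is illustrative, not the fabricated circuit's transfer curve): iterating it yields a chaotic signal, and occupation fractions of the orbit can serve as a crude membership estimate over a feature interval.

```python
import numpy as np

def n_map(x):                                       # PWL N-shape on [0, 1]
    if x < 1 / 3:
        return 3 * x                                # rising branch
    if x < 2 / 3:
        return 2 - 3 * x                            # falling middle branch
    return 3 * x - 2                                # rising branch again

x, orbit = 0.1234, []
for _ in range(10_000):
    x = n_map(x)
    orbit.append(x)
orbit = np.array(orbit[100:])                       # drop the transient
frac = np.mean((orbit > 0.4) & (orbit < 0.6))       # occupation of [0.4, 0.6]
print("fraction of iterates in [0.4, 0.6]:", round(float(frac), 3))
```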
Which wavelet is best must be decided for each application and each input image or signal, since the chosen wavelet shape affects the performance of the algorithm. In the past, researchers have chosen the wavelet shape based on (a) ease of use, (b) input signal properties, (c) a 'library' search of possible shapes, and/or (d) their own experience and intuition. We demonstrate a technique that determines the best wavelet for each image from within the class of all orthogonal wavelets (tight frames) with a fixed number of coefficients. In our technique, we compress the input with a particular wavelet, calculate the PSNR, and then adapt the wavelet coefficients dynamically to achieve the best PSNR. This feedback-based approach follows traditional adaptive filtering algorithms. The problem of building an adaptable, feedback-based wavelet filter was simplified when Lai and Roach developed an explicit parameterization of the wavelet scaling functions of short support (more specifically, a parameterization of all tight frames). The representation has one free parameter for length-4 wavelets, two for length-6 wavelets, and more for longer wavelets. As the parameters are perturbed, the scaling function's shape is perturbed as well, but it changes in such a way that the wavelet constraints remain fulfilled.
We have applied the feedback-based approach using the parameterized wavelets in an image compression scheme. For short wavelet filters (length 4 up to length 10), we have confirmed that performance varies widely as the wavelet shape is varied, and that the feedback procedure indeed converges toward optimal orthogonal wavelet filters for a given support size and a chosen image.
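A hedged sketch of the loop (the threshold "compression", test image, and coarse grid over theta are illustrative; theta = pi/3 reproduces the Daubechies db2 filter): the standard one-free-parameter family of length-4 orthogonal filters is swept, and the PSNR after compression is evaluated as the feedback signal.

```python
import numpy as np
import pywt

def length4_wavelet(theta):
    a, b = np.cos(theta), np.sin(theta)
    h = np.array([1 - a + b, 1 + a + b, 1 + a - b, 1 - a - b]) / (2 * np.sqrt(2))
    g = np.array([h[3], -h[2], h[1], -h[0]])        # quadrature mirror filter
    return pywt.Wavelet("param4", filter_bank=[h[::-1], g[::-1], h, g])

def psnr_after_compression(img, wav, frac=0.05):
    c = pywt.wavedec2(img, wav, level=3)
    arr, slices = pywt.coeffs_to_array(c)
    thr = np.quantile(np.abs(arr), 1 - frac)        # keep the largest 5%
    arr = arr * (np.abs(arr) >= thr)
    rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices,
                                             output_format="wavedec2"), wav)
    mse = np.mean((img - rec[:img.shape[0], :img.shape[1]]) ** 2)
    return 10 * np.log10(img.max() ** 2 / mse)

rng = np.random.default_rng(0)
x = np.linspace(0, 4 * np.pi, 128)
img = np.outer(np.sin(x), np.cos(1.7 * x)) + 0.1 * rng.normal(size=(128, 128))
for theta in np.linspace(0.0, np.pi, 9):            # coarse sweep of the parameter
    p = psnr_after_compression(img, length4_wavelet(theta))
    print(f"theta={theta:.2f}  PSNR={p:.2f} dB")
```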
On the basis of a local-topological approach, we propose a modification of the correlation-integral approach that minimizes the computational resources required for fractal analysis and makes the algorithm insensitive to enlarging the phase-space dimension. An effective approach for estimating the components of the characteristic equation of the extended Jacobian matrix is developed, which allows the characteristic exponents to be determined with respect to nonlinear feedback modeling. The application of the developed methods to neural systems is explored.
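For reference, a minimal sketch of the unmodified quantity in question, the Grassberger-Procaccia correlation integral on a delay-embedded time series (embedding and radii illustrative): the slope of log C(r) against log r estimates the correlation dimension.

```python
import numpy as np

def correlation_integral(series, dim, tau, radii):
    n = len(series) - (dim - 1) * tau
    emb = np.stack([series[i * tau:i * tau + n] for i in range(dim)], axis=1)
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    pair = d[np.triu_indices(n, k=1)]               # all distinct pairs
    return np.array([(pair < r).mean() for r in radii])

x = np.empty(1000); x[0] = 0.4                      # logistic map as toy chaos
for i in range(999):
    x[i + 1] = 3.9 * x[i] * (1 - x[i])
radii = np.logspace(-2.5, -0.5, 10)
C = correlation_integral(x, dim=3, tau=1, radii=radii)
slope = np.polyfit(np.log(radii), np.log(C + 1e-12), 1)[0]
print("correlation dimension estimate:", round(slope, 2))
```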
Empirical mode decomposition, or Huang decomposition, is used to extract transition signatures from the nonlinear dynamics of spatially and temporally varying oscillators. Two types of transitions are studied, the delay transition and the Maxwell transition, which correspond in thermodynamics to first- and second-order phase transitions, respectively.
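A hedged sketch of the decomposition's core operation (an illustrative two-tone signal; real EMD adds stopping criteria and careful end-point handling): one sifting pass subtracts the mean of the spline-interpolated extremal envelopes, and repeated sifting peels off the fastest intrinsic mode.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(x, t):
    imax = argrelextrema(x, np.greater)[0]
    imin = argrelextrema(x, np.less)[0]
    upper = CubicSpline(t[imax], x[imax])(t)        # upper envelope
    lower = CubicSpline(t[imin], x[imin])(t)        # lower envelope
    return x - (upper + lower) / 2

t = np.linspace(0, 1, 2000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
h = x.copy()
for _ in range(8):                                  # a few sifting iterations
    h = sift_once(h, t)
# the candidate first IMF should track the fast 40 Hz component
print("corr(first IMF, 40 Hz tone):",
      round(np.corrcoef(h, np.sin(2 * np.pi * 40 * t))[0, 1], 3))
```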
With the advent of high-throughput DNA microarrays and the broad cross-section of gene activity, or expression, that they provide, the potential for detecting and diagnosing cancer before morphogenesis has increased dramatically. While many statistical analysis methods, such as cluster analysis, have been developed to tap into this enormous information source, a reliable method of early detection and diagnosis has yet to be developed. In this paper we propose using independent component analysis (ICA) as a first step in a process to identify diseased tissue solely from its gene expression profile. In the ICA vernacular, a set of tissue samples with a known disease can be viewed as the sensors, while certain biological processes, including the manifestation of the disease, can be viewed as the signals. The goal, then, is to identify one or more demixed signals, or signatures, that can be associated with the given disease. The demixing matrix can then be used to find the biological signals of an unknown sample, which might in turn be compared with the previously determined disease signatures for diagnosis. We explore the use of this technique on a previously studied melanoma dataset (Bittner et al., 2000).
Nonlinear distortions are always introduced into biomedical signals during the acquisition stage, which causes traditional linear independent component analysis (ICA) methods to fail in subsequent signal processing. This paper investigates the nonlinear system function of the pre-amplifier and A/D converter of biomedical instruments. A general polynomial model structure with adjustable parameters is proposed to approximate the nonlinear relation in medical instruments. The model parameters are validated on a typical electrocardiograph (ECG) acquisition system using sinusoids of varying frequency and amplitude, and the inverse nonlinear transform is then applied to the acquired data to cancel the nonlinear distortions. The ICA method is applied to originally linearly mixed data, to non-rectified data, and to rectified data, and the results are compared favorably in a designed experiment using both clinical ECGs and simulated data from cardiac simulators.
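A hedged sketch of the calibration-and-rectification step (the cubic front-end below is a toy stand-in for the real pre-amplifier/A-D chain): fit a polynomial to a known calibration sinusoid, then invert the fitted model by table interpolation before the linear ICA stage.

```python
import numpy as np

def front_end(v):                                   # unknown instrument, toy
    return v + 0.15 * v ** 2 - 0.05 * v ** 3

# calibration: known input sinusoid vs measured output
v_in = np.sin(np.linspace(0, 20 * np.pi, 5000))
v_out = front_end(v_in)
p = np.polyfit(v_in, v_out, deg=5)                  # polynomial model

# inverse transform: tabulate the fitted model and interpolate backwards
grid = np.linspace(-1.2, 1.2, 4001)
fwd = np.polyval(p, grid)                           # monotone on this range
rectify = lambda y: np.interp(y, fwd, grid)

ecg_like = 0.8 * np.sin(np.linspace(0, 6 * np.pi, 1000))
distorted = front_end(ecg_like)
resid = np.max(np.abs(rectify(distorted) - ecg_like))
print(f"max residual after rectification: {resid:.2e}")
```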
A ground simulator, GSSS, for use in SAR factories and laboratories and by imaging users, has been developed. When synchronization is established between a real SAR and the GSSS, the system can be used to check and test the SAR system or to diagnose faults in it. In this paper, we discuss the basic principle of the GSSS, focusing on issues of SAR imaging, moving-target detection, and the application of time-frequency transform algorithms. Several simulated examples of moving-target detection and stationary-scene imaging are discussed, and simulation results are presented to validate the analysis and suggest new ideas.
With the growing rate of interconnection among computer systems, network security is becoming a real challenge. Intrusion detection systems (IDS) are designed to protect the availability, confidentiality, and integrity of critical networked information systems. Today's approach to network intrusion detection involves the use of rule-based expert systems to identify indications of known attacks or anomalies. However, these techniques are less successful against today's attacks: hackers are perpetually inventing new and previously unanticipated techniques to compromise the information infrastructure. This paper proposes a dynamic way of detecting network intruders in time-series data. The proposed approach is a two-step process. First, an efficient multi-user detection method is obtained by employing the recently introduced complexity-minimization approach as a generalization of standard ICA. Second, an unsupervised learning neural network architecture based on Kohonen's self-organizing map is identified for potential functional clustering. Working together adaptively, the two steps provide a pseudo-real-time novelty-detection attribute that supplements current statistical intrusion-detection methodology.
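A hedged sketch of the second step (toy feature vectors, not real connection records): a small Kohonen self-organizing map is trained on normal traffic; a large quantization error at the best-matching unit then flags a potentially novel pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = rng.random((5, 5, 3))                        # 5x5 map of 3-D prototypes

def bmu(x):                                         # best-matching unit
    d = np.linalg.norm(grid - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

def train(data, epochs=20):
    for e in range(epochs):
        lr = 0.5 * (1 - e / epochs)                 # decaying learning rate
        sig = 2.0 * (1 - e / epochs) + 0.5          # shrinking neighborhood
        for x in data:
            bi, bj = bmu(x)
            ii, jj = np.mgrid[0:5, 0:5]
            h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sig ** 2))
            grid[...] += lr * h[..., None] * (x - grid)

normal = rng.normal([0.3, 0.3, 0.3], 0.05, size=(500, 3))
train(normal)
attack = np.array([0.9, 0.9, 0.9])                  # unseen traffic pattern
q = lambda x: np.linalg.norm(grid[bmu(x)] - x)      # quantization error
print("normal q-err:", round(q(normal[0]), 3), " attack q-err:", round(q(attack), 3))
```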
A set of experiments was conducted to study the initial shock wave that is generated in the early launching stage and is harmful to humans and equipment. The pressure-time history profile consists of the shock wave and rocket noise, the two inherent flow features of rocket exhaust flow. This makes it difficult to calculate typical parameters such as peak overpressure, positive duration, and waveform coefficient, so the intensity of the shock wave is hard to determine by traditional methods. Wavelet threshold de-noising is used in this paper to separate the shock-wave profile from the noise. The Daubechies and Symlets wavelet families are compared for threshold treatment of the shock-wave data; wavelet thresholding plus a priori knowledge allows the initial shock wave to be detected well within the rocket noise. Three characteristic parameters of the shock wave are determined and compared in the study. The results show that a low-order wavelet with small support width better preserves the singularity of the shock wave. This helps in understanding the behavior of the initial shock wave of a rocket jet in engineering applications.
This paper presents a modular low-voltage analog front-end processor design in a 0.5 µm double-poly, double-metal CMOS technology for ion-sensitive field-effect transistor (ISFET) sensors and H+ sensing applications. To match the potentiometric response of the ISFET, which is proportional to the H+ concentration, a constant-voltage, constant-current (CVCS) testing configuration is used. Low-voltage design techniques such as a bulk-driven input pair, a folded-cascode amplifier, and bootstrapped switch-control circuits have been designed and integrated for a 1.5 V supply and nearly rail-to-rail analog-to-digital signal processing. The core modules developed in this research consist of an 8-bit two-step analog-to-digital converter and bulk-driven pre-amplifiers. Experimental results show that the proposed circuitry has acceptable linearity down to 0.1 pH in H+ sensing with buffer solutions in the range pH 2 to pH 12. The processor has potential uses in battery-operated, portable healthcare devices and environmental monitoring applications.
This paper describes the hardware implementation of a real-time video codec using reversible wavelets. The TechSoft (TS) real-time video system employs wavelet differencing for inter-frame compression, based on independent Embedded Block Coding with Optimized Truncation (EBCOT) of the embedded bit stream. This high-performance scalable image compression using EBCOT has been selected as part of the new ISO image compression standard, JPEG2000. The TS real-time video system can process up to 30 frames per second (fps) in the DVD format. In addition, audio signals are processed by the same design to reduce cost. Reversible wavelets are used not only for cost reduction but also for lossless applications. Design and implementation issues of the TS real-time video system are discussed.
The empirical analysis of financial return-generating processes reveals features common to other research fields, such as network-traffic internet data, physiological studies of the human heartbeat, recorded speech and sleep time series, and geophysical signals, to mention some well-known cases. In particular, long-range dependence, intermittency, and heteroscedasticity appear clearly, and consequently power laws and multi-scaling behavior are typical signatures of both spectral and time-correlation diagnostics. We study these features and the dynamics underlying financial volatility, which can respectively be detected in, and inferred from, high-frequency realizations of stock index returns, and show that they vary according to the resolution levels used for both the analysis and the synthesis of the available information. Discovering whether the volatility dynamics are subject to changes in scaling regime requires a model embedding scale-dependent information packets, thus accounting for possibly heterogeneous activity in financial markets. Independent component analysis proves to be an important tool for reducing the dimension of the problem and for calibrating greedy approximation techniques aimed at learning the structure of the underlying volatility.
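One multiscale diagnostic of this kind can be sketched as follows (a synthetic power-law series; the exponent and wavelet are illustrative): if log2 of the wavelet-coefficient variance grows roughly linearly across decomposition levels, the series exhibits power-law scaling, and the slope of that growth estimates the scaling exponent.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
# toy long-memory series synthesized from a power-law spectrum
f = np.fft.rfftfreq(2 ** 14, 1.0); f[0] = np.inf
spec = f ** -0.4 * np.exp(2j * np.pi * rng.random(f.size))
x = np.fft.irfft(spec)

coeffs = pywt.wavedec(x, "db3", level=8)
for j, d in enumerate(reversed(coeffs[1:]), start=1):   # fine -> coarse
    print(f"level {j}: log2 var = {np.log2(np.var(d)):6.2f}")
# an approximately constant increment between levels indicates power-law scaling
```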
Video compression is widely adopted for bandwidth-saving reasons on the Internet and in digital TV. Although advanced compression methodologies are highly desirable for much higher compression ratios and much better streaming-video quality, the standard compression methods have nevertheless remained at the level of JPEG2000 for single image frames, using DCT and wavelets, and MPEG, using DCT and exhaustive pixel correlation for optical-flow compression of the video. With the recent advent of sophisticated sensor technologies for NASA space missions, however, both the frame rate and the resolution of video streams will soon be beyond the traditional capability of these DCT-based compression methods. The goal is to find fundamental units of video that can be applied at ultra-high frame rates. In this paper, a novel implementation of video compression is based, for the first time, on singularity maps in both space and time.