KEYWORDS: Particle filters, Image segmentation, Video, Monte Carlo methods, Particles, Video processing, Edge detection, Cameras, Image processing, Data modeling
Recently we have been concerned with locating and tracking images of fish in underwater videos. While edge detection and region growing have yielded some advances in this effort, a more extensive, non-linear approach appears necessary for improved results. In particular, particle filtering applied to contour detection in natural images has met with some success. Following recent ideas in the literature, we propose a recursive Bayesian model which employs a sequential Monte Carlo approach, also known as the particle filter. This approach uses the corroboration between two scales of an image to produce local features which characterize the probability densities required by the particle filter. Since our data consist of video images of fish recorded by a stationary camera, we can augment this process by means of background subtraction. Moreover, we propose a method that does not require pre-computation of the distributions used by the particle filter. These capabilities are applied to our dataset for contour detection, with the aim of eventual segmentation of the fish images and fish classification. Although our dataset consists of fish images, the proposed techniques can be employed in applications involving other kinds of non-stationary underwater objects. We present results and examples of this analysis and discuss the application of the particle filter to our dataset.
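As a minimal illustration of the sequential Monte Carlo machinery underlying this approach, the sketch below implements a generic bootstrap (SIR) particle filter for a 1-D random-walk state. It is not the contour-detection model described above; the noise scales and function name are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=500):
    """Bootstrap (SIR) particle filter for a 1-D random-walk state model."""
    particles = rng.normal(0.0, 1.0, n_particles)   # samples from the prior
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for z in observations:
        # Predict: propagate particles through the random-walk motion model.
        particles = particles + rng.normal(0.0, 0.5, n_particles)
        # Update: reweight particles by the Gaussian observation likelihood.
        weights *= np.exp(-0.5 * ((z - particles) / 0.7) ** 2)
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))
        # Resample when the effective sample size degenerates.
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
    return np.array(estimates)
```

The predict/update/resample cycle is the same regardless of what the state represents; in the contour-detection setting the state and likelihood would be replaced by contour parameters and image-derived densities.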
KEYWORDS: Particles, Field programmable gate arrays, Clocks, Nonlinear filtering, Particle filters, Digital signal processing, Signal processing, Control systems, Logic, Process modeling
The particle flow filters, proposed by Daum & Hwang, provide a powerful means for density-based nonlinear filtering, but their computation is intensive and may be prohibitive for real-time applications. This paper proposes a design for a superfast implementation of the exact particle flow filter using a field-programmable gate array (FPGA) as a parallel environment to speed up computation. Simulation results from a nonlinear filtering example are presented to demonstrate that an FPGA can dramatically accelerate particle flow filters through parallelization, at the expense of a tolerable loss in accuracy compared to a nonparallel implementation.
Most often, background subtraction and image segmentation methods use images or video captured using a single camera. However, segmentation can be improved using stereo images by reducing errors caused by illumination fluctuations and object occlusion. This work proposes a background subtraction and image segmentation method for images obtained using a two-camera stereo system. Stereo imaging is often employed in order to obtain depth information. The objective of this work, on the other hand, is mainly to extract accurate boundaries of objects from stereo images, which are otherwise difficult to obtain. Improving the outline detection accuracy is vital for object recognition applications. An application of the proposed technique is presented for the detection and tracking of fish in underwater image sequences. Fish outline detection is a challenging task since fish are not rigid objects. Moreover, color is not necessarily a reliable means to segment underwater images; therefore, grayscale images are used. For these two reasons, and because underwater images captured in non-controlled environments are often blurry and poorly illuminated, commonly used local correlation methods are not sufficient for stereo image matching. The proposed algorithm improves segmentation in several scenarios, including cases where fish are occluded by other fish regions. Although the work concentrates on segmenting fish images, it can be employed in other underwater image segmentation applications where visible-light cameras are used.
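Background subtraction against a stationary camera can be illustrated with a simple running-average background model. This is a generic single-camera sketch, not the stereo method proposed above, and the `alpha` and `thresh` parameters are illustrative assumptions.

```python
import numpy as np

def background_subtract(frames, alpha=0.05, thresh=0.1):
    """Running-average background model: each frame nudges the background
    estimate, and pixels far from the estimate are flagged as foreground."""
    bg = frames[0].astype(float)
    masks = []
    for f in frames[1:]:
        f = f.astype(float)
        masks.append(np.abs(f - bg) > thresh)   # foreground mask for frame f
        bg = (1 - alpha) * bg + alpha * f       # slowly adapt the background
    return bg, masks
```

A stereo system would run a model like this per camera and then fuse the two masks to reduce illumination- and occlusion-induced errors.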
Several pattern-matching techniques have focused on affine invariant pattern matching, mainly because rotation, scale, translation, and shear are common image transformations. In some situations, other transformations may be modeled as a small deformation on top of an affine transformation. This work presents an algorithm which aims at improving existing Fourier Transform (FT)-based pattern matching techniques in such a situation. The pattern is first decomposed into non-overlapping concentric circular rings, which are centered in the middle of the pattern. Then, the FT of each ring is computed. Essentially, adding the individual complex-valued FTs provides the overall FT of the pattern. Past techniques used the overall FT to identify the parameters of the affine transformation between two patterns. In this work, it is assumed that the rings may be rotated with respect to each other; thus, parameters of transformations beyond the affine ones can be computed. The proposed method determines this variable angle of rotation starting from the FT of the outermost ring and moving inwards to the FT of the innermost ring. The variable angle of rotation provides information about the directional properties of a pattern. Two methods are investigated, namely a dynamic programming algorithm and a greedy algorithm, in order to determine the variable angle of rotation. The intuition behind this approach is that since the rings are not necessarily aligned in the same manner for different patterns, their ring FTs may also be rotated with respect to each other. Simulations demonstrate the effectiveness of the proposed technique.
Median filtering has been an effective way for reducing noise of the impulsive kind in images. Yet, the inherent problem
with median filters is that their performance may be limited if images are corrupted by a significant amount of noise. In
such cases, large median filters may have to be considered, resulting in the removal of fine image details. In order to
alleviate this problem, several techniques have been developed and presented in the literature with the purpose of
detecting the locations of noisy pixels and applying median filters only at those locations. As a result, image pixels not
associated with noise remain unaffected. In the recent past, a method in which noisy pixels were identified based on the
information extracted from four directional pixel neighborhoods was proposed. The technique used four directional
weighted median filters for processing the detected noisy pixels. It was shown that by considering different directional
neighborhoods around each pixel, the fine details of the image, such as thin lines, were preserved, even after filtering
was applied. This paper investigates an extension to the previous technique that uses local pixel neighborhoods, in
addition to directional ones, which cover a wider spectrum of shapes. The objective of this modification is to increase the
possibility of identifying at least one neighborhood which does not cross over fine image details. Comparisons between
the original and the proposed method suggest that considering a larger variety of pixel neighborhood shapes is beneficial
for impulsive noise detection and removal.
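A simplified sketch of the directional-neighborhood idea follows: impulse pixels are detected from directional differences, and only those pixels are median filtered. The neighborhood offsets and the detection threshold are illustrative assumptions, not the exact weighted filters of the cited technique.

```python
import numpy as np

# Four directional 1-D neighborhoods (row/col offsets) around a pixel:
# horizontal, vertical, and the two diagonals.
DIRECTIONS = [
    [(-2, 0), (-1, 0), (1, 0), (2, 0)],
    [(0, -2), (0, -1), (0, 1), (0, 2)],
    [(-2, -2), (-1, -1), (1, 1), (2, 2)],
    [(-2, 2), (-1, 1), (1, -1), (2, -2)],
]

def remove_impulse_noise(img, threshold=40.0):
    """Detect impulse pixels via directional differences and median-filter
    only those pixels, leaving noise-free pixels untouched."""
    img = img.astype(float)
    out = img.copy()
    rows, cols = img.shape
    for r in range(2, rows - 2):
        for c in range(2, cols - 2):
            # Mean absolute difference along each direction: a pixel on a thin
            # line keeps at least one direction's difference small, while an
            # impulse differs from its neighbors in every direction.
            diffs = [np.mean([abs(img[r + dr, c + dc] - img[r, c])
                              for dr, dc in d]) for d in DIRECTIONS]
            if min(diffs) > threshold:
                # Replace with the median along the most homogeneous direction.
                d = DIRECTIONS[int(np.argmin(diffs))]
                vals = [img[r + dr, c + dc] for dr, dc in d] + [img[r, c]]
                out[r, c] = np.median(vals)
    return out
```

The extension discussed above would add local (non-directional) neighborhood shapes to the `DIRECTIONS` list so that at least one neighborhood avoids crossing fine details.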
Current research on gaze tracking, specifically relating to mouse control, is often limited to infrared cameras. Since these
can be costly and unsafe to operate, inexpensive optical cameras are a viable alternative. This paper presents image
processing techniques and algorithms to control a computer mouse using an optical camera. Usually, eye tracking
techniques utilize cameras mounted on devices located relatively far away from the user, such as a computer monitor.
However, in such cases, the techniques used to determine the direction of gaze are inaccurate due to the constraints
imposed by the camera resolution in conjunction with the limited size of the pupil. In order to achieve higher accuracy in pupil
detection, and therefore mouse control, the camera used is head-mounted and placed near one of the user's eyes.
Given the increasingly dense environment in both low-earth orbit (LEO) and geostationary orbit (GEO), a sudden
change in the trajectory of any existing resident space object (RSO) may cause potential collision damage
to space assets. With a constellation of electro-optical/infrared (EO/IR) sensor platforms and ground radar
surveillance systems, it is important to design optimal estimation algorithms for updating nonlinear object
states and allocating sensing resources to effectively avoid collisions among many RSOs. Previous work on
RSO collision avoidance often assumes that the maneuver onset time or maneuver motion of the space object
is random and the sensor management approach is designed to achieve efficient average coverage of the RSOs.
Few attempts have included the inference of an object's intent in response to an RSO's orbital change.
We propose a game theoretic model for sensor selection and assume the worst case intentional collision of an
object's orbital change. The intentional collision results from maximal exposure of an RSO's path. The resulting
sensor management scheme achieves robust and realistic collision assessment, provides alerts for impending collisions,
and identifies early RSO orbital change with lethal maneuvers. We also consider information sharing among
distributed sensors for collision alert and an object's intent identification when an orbital change has been
declared. We compare our scheme with the conventional (non-game based) sensor management (SM) scheme
using a LEO-to-LEO space surveillance scenario where both the observers and the unannounced and unplanned
objects have complete information on the constellation of vulnerable assets. We demonstrate that, with adequate
information sharing, the distributed SM method can achieve performance close to that of centralized SM in
identifying unannounced objects and issuing early warnings to the RSO of potential collisions to ensure a proper
selection of collision avoidance action.
This paper investigates an approach for identification of small-scale precipitation structures within significantly
larger-scale structures in weather radar imaging. The technique utilizes directional smoothing filters to extract
directional information which is not readily observable within large precipitation events. The main goal is to
track these directional characteristics over time, and thus, to predict the overall motion of large structures for
the purpose of forecasting. The objective of this work is not to compete against other weather radar imaging-based
forecasting techniques, but to supplement them. Experimental results illustrate how tracking of directional
structures can be effectively performed.
KEYWORDS: Data modeling, Neural networks, Radar, Motion models, Signal to noise ratio, Matrices, Computer simulations, Systems modeling, Electrical engineering, Visual information processing
Radial Basis Function neural networks (RBFNN) have been used for tracking precipitation in weather imagery.
Techniques presented in the literature used RBFNN to model precipitation as a combination of localized envelopes
which evolve over time. A separate RBFNN was used to predict future values of the evolving envelope parameters
considering each parameter as a time series. Prediction of envelope parameters is equivalent to forecasting the
associated weather events. Recently, the authors proposed an alternative RBFNN-based approach for modeling
precipitation in weather imagery in a computationally efficient manner. However, the event prediction stage
was not investigated, and thus any possible trade-off between efficiency and forecasting effectiveness was not
examined. In order to facilitate such a test, an appropriate prediction technique is needed. In this work, an
RBFNN series prediction scheme explores the dependence of envelope parameters on each other. Although
different approaches can be employed for training the RBFNN predictor, a computationally efficient subset
selection method is adopted from past work, and adjusted to support parameter dependence. Simulations are
presented to illustrate that simultaneous prediction of the precipitation event parameters may be advantageous.
Large-scale weather radar signatures are easier to identify compared to smaller-scale events. The location of such
signatures can be predicted and tracked. Thus, large-scale signatures are useful in forecasting. Identification
of these signatures in radar imagery can be facilitated through the use of smoothing filters. In particular,
processing of radar imagery using directional smoothers has been shown to be more effective in retaining the
storm front characteristics compared to isotropic smoothers. Moreover, efficient directional smoothing techniques
have been developed that are capable of quickly processing large amounts of data. An advantage of smoothers
operating in the spatial domain is that they are capable of involving logical operations in order to determine
which pixels should be processed or neglected. This paper extends a recently introduced computationally efficient
separable/steerable Gaussian-based smoothing technique in three aspects. First, the technique is generalized so
that computationally efficient filters having shapes other than Gaussian with respect to their main orientation
can be designed. Second, it is shown that the technique presented in this work is more efficient than the
commonly used angular harmonic expansion. Third, a technique that combines directional and isotropic filtering
is introduced. The technique is capable of revealing directional structures hidden in large-scale signatures, and
thus can be employed as a preprocessing step in forecasting applications.
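A directional Gaussian smoother of the kind discussed above can be illustrated by building an anisotropic Gaussian kernel oriented at an angle theta. This minimal version is neither separable nor steerable as in the paper; the kernel size and sigmas are illustrative assumptions.

```python
import numpy as np

def oriented_gaussian_kernel(size=15, sigma_u=4.0, sigma_v=1.0, theta=0.0):
    """Anisotropic Gaussian smoothing kernel elongated along angle theta.
    A large sigma_u / sigma_v ratio smooths along the chosen orientation
    while preserving structure across it."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so u runs along the filter orientation.
    u = x * np.cos(theta) + y * np.sin(theta)
    v = -x * np.sin(theta) + y * np.cos(theta)
    k = np.exp(-0.5 * ((u / sigma_u) ** 2 + (v / sigma_v) ** 2))
    return k / k.sum()   # normalize so smoothing preserves mean intensity
```

Convolving an image with a bank of such kernels at several theta values and keeping, per pixel, the response along the best orientation is one simple way to retain storm-front characteristics that an isotropic smoother would blur.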
Tracking of storm fronts in weather imagery is important for several weather-related applications. Coastal-area
weather radars provide coverage up to 200-250 miles into the ocean, and thus can help with tracking of
storm fronts to support forecasting in those areas. Another application where tracking of storm fronts can be
of assistance is clutter/rain classification. Specifically, the path of a tracked event can be used to decide if the
particular event corresponds to precipitation or clutter. For instance, clutter usually appears to be a relatively
static event. Precipitation can be modeled as a mixture of localized functions, each changing in terms of shape,
position, and intensity. Tracking of precipitation events can be performed via tracking of the localized function
parameters. In this paper, the modeling of rain events using Radial Basis Function neural networks (RBFNN) is
studied. In the recent past, such techniques have been used for forecasting. Although effective, these techniques
have been found to be computationally expensive. In this work, we evaluate the feasibility of modeling rain
events using RBFNN in an efficient manner, and we propose modifications to existing techniques to achieve this
goal.
Human-computer interfacing (HCI) describes a system or process with which two information processors, namely
a human and a computer, attempt to exchange information. Computer-to-human (CtH) information transfer
has been relatively effective through visual displays and sound devices. On the other hand, the human-to-computer
(HtC) interfacing avenue has yet to reach its full potential. For instance, the most common HtC
communication means are the keyboard and mouse, which are already becoming a bottleneck in the effective
transfer of information. The solution to the problem is the development of algorithms that allow the computer
to understand human intentions based on their facial expressions, head motion patterns, and speech. In this
work, we are investigating the feasibility of a stereo system to effectively determine the head position, including
the head rotation angles, based on the detection of eye pupils.
In this paper, we propose the use of directional Gabor filtering and multifractal analysis based quality control (QC) to
provide accurate identification of precipitation in weather data collected from meteorological-radar volume scans. The
QC algorithm is an objective algorithm that minimizes human interaction. The algorithm utilizes both textural and
intensity information obtained from the two lower-elevation reflectivity maps. Computer simulations are provided to
show the effectiveness of this algorithm.
KEYWORDS: LIDAR, Buildings, Vegetation, Digital filtering, Data modeling, Data processing, Image segmentation, Tin, Visual process modeling, Visual analytics
Obtaining high resolution Digital Elevation Models (DEMs) is a critical task for analysis and visualization in several
remote sensing applications. LIDAR technology provides an effective way for obtaining high-resolution topographic
information. This paper presents a novel texture-based automatic algorithm for DEM generation from LIDAR data. The
proposed technique uses multifractal-based textural features for object identification, combined with a maximum slope
filter. Although this work concentrates on DEM generation, certain aspects of the algorithm make it suitable for
classifying LIDAR data into other object classes. Some experimental results are presented to illustrate the
effectiveness of the proposed algorithm.
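A bare-bones version of a maximum slope filter on a gridded surface model can be sketched as follows; the cell size, search radius, and slope threshold are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def max_slope_ground_mask(dsm, cell=1.0, radius=3, max_slope=0.3):
    """Label a grid cell as ground when the steepest downward slope from it
    to any neighbor within `radius` cells stays below `max_slope`; abrupt
    rises (buildings, vegetation) exceed the threshold and are filtered out."""
    rows, cols = dsm.shape
    ground = np.ones((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            r0, r1 = max(r - radius, 0), min(r + radius + 1, rows)
            c0, c1 = max(c - radius, 0), min(c + radius + 1, cols)
            win = dsm[r0:r1, c0:c1]
            yy, xx = np.mgrid[r0:r1, c0:c1]
            dist = cell * np.hypot(yy - r, xx - c)
            dist[r - r0, c - c0] = np.inf        # ignore the cell itself
            if np.any((dsm[r, c] - win) / dist > max_slope):
                ground[r, c] = False
    return ground
```

In the proposed algorithm a filter of this kind is combined with textural features, which helps with cases (e.g. large flat rooftops) where slope alone cannot separate objects from terrain.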
SAR imaging has been extensively used in several applications including automatic target detection and recognition. In
this paper, a wavelet/fractal (WF)-based target detection technique is presented. The technique computes a fractal-based
feature on an edge image, as opposed to existing fractal methods that compute the fractal dimension on the original
image. The edge image is produced through the use of wavelets. The technique is evaluated for target detection in SAR
images, and compared with a previous fractal-based approach, namely the extended fractal (EF) model. Experimental
results illustrate that WF provides lower false alarm rates for the same probability of detection compared to EF.
Furthermore, it is shown that WF provides higher spatial resolution capabilities for the detection of closely located
targets.
This paper introduces a new autocorrelation (ACR)-based approach for pitch detection in speech, designed
especially to deal with voluntary and involuntary fast variations of the pitch period. The technique may be
employed independently, or may be used to substitute the traditional ACR function used in existing techniques.
Experimental results illustrate the effectiveness of the proposed technique in determining the pitch period, and
especially for rapid pitch period variations where the traditional ACR fails.
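For reference, the traditional ACR-based pitch estimator that the proposed technique builds on can be sketched as below; the lag search range (corresponding to 60-400 Hz) is an illustrative assumption.

```python
import numpy as np

def detect_pitch(frame, fs, fmin=60.0, fmax=400.0):
    """Estimate the pitch of a voiced speech frame from the peak of its
    autocorrelation (ACR) function within a plausible lag range."""
    frame = frame - np.mean(frame)
    # One-sided autocorrelation for lags 0 .. len(frame)-1.
    acr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    acr /= acr[0]                      # normalize so acr[0] == 1
    lo = int(fs / fmax)                # shortest plausible pitch period
    hi = int(fs / fmin)                # longest plausible pitch period
    lag = lo + int(np.argmax(acr[lo:hi]))
    return fs / lag                    # pitch frequency in Hz
```

Because a single peak pick over a whole frame assumes a locally constant period, this baseline is exactly where rapid pitch variations cause failures, which the proposed substitute ACR function targets.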
Skew detection in document images is an important pre-processing step for several document analysis algorithms. In this
work, we propose a fast method that estimates skew angles based on a local-to-global approach. Many existing
techniques that are based on connected component analysis group together pixels in order to form small document
objects. Then, a Hough transform is used to estimate the skew angle. The connected components detection process
introduces an undesired overhead. Nearest neighbor based techniques rely only on local groups and thus fail to
achieve high skew estimation accuracy. Techniques based on projections create 1-D profiles by successively rotating the document
in a range of angles. The detection speed can be accelerated considering rotations from coarse to fine. However, the
rotation and projection can be relatively slow. The proposed technique is characterized by both high processing speed
and high skew estimation accuracy. First, local ring-shaped areas are analyzed for an initial skew estimation by building
angle histograms between random points and the ring centers. Following a ring selection process, a single histogram is
obtained. A range of angles around the best candidates obtained from the initial skew estimation is further examined.
Experimental results have shown that the proposed technique yields superior results in terms of estimation accuracy and
speed compared to existing techniques.
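A much-simplified version of the angle-histogram idea can be sketched as follows: angles between random pairs of foreground pixels are accumulated into a histogram whose peak tracks the dominant text-line orientation. The ring-area analysis, ring selection, and coarse-to-fine refinement stages of the proposed method are omitted, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_skew(binary_img, n_pairs=5000, n_bins=180):
    """Crude skew estimate from an angle histogram over random pixel pairs."""
    ys, xs = np.nonzero(binary_img)          # foreground (ink) pixels
    hist = np.zeros(n_bins)
    for _ in range(n_pairs):
        i, j = rng.integers(0, len(xs), 2)
        dx, dy = int(xs[j]) - int(xs[i]), int(ys[j]) - int(ys[i])
        if dx == 0 and dy == 0:
            continue
        # Fold the pair's direction into an orientation in [0, 180) degrees.
        ang = np.degrees(np.arctan2(dy, dx)) % 180.0
        hist[int(ang * n_bins / 180.0) % n_bins] += 1
    ang = np.argmax(hist) * 180.0 / n_bins
    return ang if ang <= 90.0 else ang - 180.0   # map to (-90, 90]
```

Random-pair sampling is what keeps a local-to-global method like this fast: no connected components and no repeated image rotations are needed before the final fine search.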
In this paper a target detection technique based on a rotational invariant wavelet-based scheme is presented. The
technique is evaluated on SAR imaging and compared with a previous fractal-based technique, namely the extended
fractal (EF) model. Both techniques attempt to exploit the textural characteristics of SAR imagery. Recently a
wavelet/fractal feature set, similar to the proposed one, was compared with a feature set similar to EF for a general
texture classification problem. The wavelet technique yielded lower classification error than EF, which motivated the
comparison between the two techniques presented in this paper. Experimental results show that the proposed technique
has the potential for providing lower false alarm rates compared to EF.
KEYWORDS: Digital watermarking, Image processing, Image compression, Quantization, Signal processing, Digital filtering, Visualization, Feature extraction, Signal to noise ratio, Image filtering
In this paper a robust watermarking technique based on Vector Quantization and the previously developed spread
spectrum robust watermarking technique is proposed. In this work, the watermark is embedded in both the DCT and
codebook domains. Results illustrate that the proposed technique provides an improvement over the spread spectrum
watermarking technique in terms of robustness for various signal processing attacks. A discussion on the robustness of
the technique against the dead-lock and collusion problems is also provided.
In this paper, we propose the use of optoelectronic joint transform correlator (JTC) and multifractal analysis based quality control (QC) to provide accurate, real-time identification of precipitation in weather data collected from meteorological-radar volume scans. The multifractal based QC algorithm is an objective algorithm that minimizes human interaction. The algorithm utilizes both textural and intensity information obtained from the two lower-elevation reflectivity maps. The multifractal exponents are obtained using the JTC system. Computer simulations are provided to show the effectiveness of this system.
This paper introduces an approach for synthesizing natural textures, with emphasis on quasi-periodic and structural textures. The process consists of two stages. In the first stage, the basic textural elements (texels) and the basic textural structure are determined. This is achieved by identifying two fundamental frequencies in the texture, for two different orientations. The basic structure is a non-regular mesh that defines the placeholders for texels. We call such placeholders e-texels (empty texels). In the second stage, a new textural structure is designed from the original one, and its e-texels are filled in by texels obtained from the original patch. Texels of the same texture are expected to possess a high degree of similarity, thus the new structure could be filled in at random. However, a transition probability approach is used in order to retain local textural characteristics. More specifically, assuming that texel A is the last texel placed in the new structure, the e-texel closest to A is found. The e-texel is replaced by texel B from the old structure if the relative position between A and the e-texel is similar to the relative position between A and B in the old structure. This technique is an extension of a general texture synthesis technique previously developed by the author. The proposed technique is suited for structural textures since blockage effects are eliminated by allowing irregular-shape texels to be merged, contrary to the previous general technique where the merged blocks are squares. Results show that the proposed method is successful in synthesizing structural textures.
This paper introduces an approach for synthesizing natural textures. Textures are modeled using a block-transition probabilistic model. In the training phase, the original textured image is split into equal size blocks, and clustered using the k-means clustering algorithm. Then, the transition probabilities between block-clusters are calculated. In the synthesis phase, the algorithm generates a sequence of indices, each representing a block-cluster, based on the transition probabilities. One advantage of this method over previous block sampling techniques is its stability. More specifically, the texture is synthesized block-by-block in a raster order. The block at a specific location is selected from one of the original image blocks. Thus, synthesis does not lead to artifacts. Additionally, the algorithm uses pre- and post-filtering. The image is filtered by a predictive filter, and the residual image is modeled using the probabilistic approach. The final synthesized image is the result of filtering the residual image by the inverse filter. Using pre- and post-processing eliminates the blockage effect. Moreover, the algorithm is computationally inexpensive, and the synthesis phase is particularly fast since it only requires generation of a sequence of cluster indices. Results show that the proposed method is successful in synthesizing realistic natural textures for a large variety of textures.
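The training and synthesis phases described above can be sketched as follows. Block size, number of clusters, and the small transition prior are illustrative assumptions, and the pre-/post-filtering stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

def train_block_model(img, block=8, k=16, iters=10):
    """Cluster non-overlapping blocks with k-means and estimate the
    raster-order transition probabilities between block clusters."""
    h, w = img.shape
    blocks = np.array([img[r:r + block, c:c + block].ravel()
                       for r in range(0, h - block + 1, block)
                       for c in range(0, w - block + 1, block)], dtype=float)
    # Plain k-means: alternate assignment and centroid update.
    centers = blocks[rng.choice(len(blocks), k, replace=False)]
    for _ in range(iters):
        d = ((blocks[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = blocks[labels == j].mean(0)
    # Transition counts between consecutive blocks in raster order.
    trans = np.full((k, k), 1e-6)           # small prior avoids dead states
    for a, b in zip(labels[:-1], labels[1:]):
        trans[a, b] += 1.0
    trans /= trans.sum(1, keepdims=True)    # rows become probabilities
    return centers, trans

def synthesize_indices(trans, n, start=0):
    """Generate a raster-order sequence of cluster indices from the model."""
    seq = [start]
    for _ in range(n - 1):
        seq.append(int(rng.choice(len(trans), p=trans[seq[-1]])))
    return seq
```

To produce pixels, each generated index would be replaced by a block drawn from the corresponding cluster of original blocks, which is why synthesis stays artifact-free and fast.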
This paper introduces a novel adaptive cascade architecture for image compression. The idea is an extension of parallel neural network (NN) architectures which have been previously used for image compression. It is shown that the proposed technique results in higher image quality for a given compression ratio than existing NN image compression schemes. It is also shown that training of the proposed architecture is significantly faster than that of other NN-based
techniques and that the number of learning parameters is small. This allows the coding process to include adaptation of the learning parameters; thus, compression does not depend on the selection of the training set as in previous single and parallel NN structures.
In this paper, we introduce a modification of the Fuzzy ARTMAP (FAM) neural network, namely, the Fuzzy ARTMAP with adaptively weighted distances (FAMawd) neural network. In FAMawd we substitute the regular L1-norm with a weighted L1-norm to measure the distances between categories and input patterns. The distance-related weights are a function of a category's shape and allow for bias in the direction of a category's expansion during learning. Moreover, the modification to the distance measurement is proposed in order to study the capability of FAMawd in achieving more compact knowledge representation than FAM, while simultaneously maintaining good classification performance. For a special parameter setting FAMawd simplifies to the original FAM, thus, making FAMawd a generalization of the FAM architecture. We also present an experimental comparison between FAMawd and FAM on two benchmark classification problems in terms of generalization performance and utilization of categories. Our obtained results illustrate FAMawd's potential to exhibit low memory utilization, while maintaining classification performance comparable to FAM.
In this paper we introduce a feature set for texture segmentation, based on an extension of fractal dimension features. Fractal dimension extracts roughness information from images considering all available scales at once. In this work a single scale is considered at a time so that textures that do not possess scale invariance are sufficiently characterized. Single scale features are combined with multiple scale features for a more complete textural representation. Wavelets are employed for the computation of single and multiple scale roughness features due to their ability to extract information at different resolutions. Features are extracted at multiple directions using directional wavelets, and the feature vector is finally transformed to a rotational invariant feature vector that retains the texture directional information. An iterative K-means scheme is used for segmentation. The use of the roughness feature set results in high quality segmentation performance. The feature set retains the important properties of fractal dimension based features, namely insensitivity to absolute illumination and contrast.
In this paper we present an automatic algorithm for the removal of echoes that are caused by anomalous propagation (AP) from the lower radar elevation. The algorithm uses textural information as well as intensity characteristics of reflectivity maps that are obtained from the two lower radar elevations. The texture of the reflectivity maps is analyzed with the help of multifractals. We present examples that illustrate the effectiveness of our algorithm. We compare our algorithm with a manual algorithm that was developed by NASA/TRMM for AP removal, in terms of total rain accumulation and in terms of the number of pixels removed.
In this paper we present a modification of the test phase of ARTMAP-based neural networks that improves the classification performance of the networks when the patterns that are used for classification are extracted from noisy signals. The signals that are considered in this work are textured images, which are a case of 2D signals. Two neural networks from the ARTMAP family are examined, namely the Fuzzy ARTMAP (FAM) neural network and the Hypersphere ARTMAP (HAM) neural network. We compare the original FAM and HAM architectures with the modified ones, which we name FAM-m and HAM-m respectively. We also compare the classification performance of the modified networks, and of the original networks when they are trained with patterns extracted from noisy textures. Finally, we illustrate how combination of features can improve the classification performance for both the noiseless and noisy textures.
In this paper texture classification is studied based on the fractal dimension (FD) of filtered versions of the image and the Fuzzy ARTMAP neural network (FAMNN). FD is used because it has shown good tolerance to some image transformations. We implemented a variation of the testing phase of Fuzzy ARTMAP that exhibited superior performance compared to the standard Fuzzy ARTMAP and the 1-nearest neighbor (1-NN) classifier in the presence of noise. The performance of the above techniques is tested with respect to segmentation of images that include more than one texture.
This paper describes an approach to segmentation of textured grayscale images using a technique based on image filtering and the fractal dimension (FD). Twelve FD features are computed based on twelve filtered versions of the original image using directional Gabor filters. Features are computed in a window and mapped to the central pixel of this window. An iterative K-means-based algorithm which includes feature smoothing and takes into consideration the boundaries between textures is used to segment an image into a desired number of clusters. This approach is partially supervised since the number of clusters has to be predefined. The fractal features are compared to Gabor energy features and the iterative K-means algorithm is compared to the original K-means clustering approach. The performance of segmentation for noisy images is also studied.
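The windowed-feature plus K-means pipeline can be sketched as below. For brevity, simple local mean and standard-deviation features stand in for the twelve Gabor-filtered fractal features, and the iterative boundary-aware refinement is omitted; window size and cluster count are illustrative.

```python
import numpy as np

def local_features(img, win=9):
    """Per-pixel feature vector: local mean and local standard deviation
    in a win x win window, mapped to the window's central pixel (a stand-in
    for the filtered fractal-dimension features)."""
    pad = win // 2
    p = np.pad(img, pad, mode="reflect")
    feats = np.zeros(img.shape + (2,))
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            w = p[r:r + win, c:c + win]
            feats[r, c] = (w.mean(), w.std())
    return feats

def kmeans_segment(feats, k=2, iters=10):
    """Cluster per-pixel features with K-means; the label map is the segmentation."""
    x = feats.reshape(-1, feats.shape[-1]).astype(float)
    # Farthest-point initialization keeps the seeds well separated.
    centers = [x[0]]
    for _ in range(k - 1):
        d = np.min([((x - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(x[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(iters):
        labels = ((x[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(0)
    return labels.reshape(feats.shape[:2])
```

In the paper's setting the two-column feature matrix would have twelve columns (one per Gabor orientation/scale), but the clustering step is unchanged.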