This paper discusses (a) the design and implementation of the integrated radio tomographic imaging (RTI) interface for radio signal strength (RSS) data obtained from a wireless imaging sensor network (WISN), (b) the use of model-driven methods to determine the extent of regularization to be applied to reconstruct images from the RSS data, and (c) a preliminary study of the performance of the network.
This paper demonstrates methods to select and apply regularization to the linear least-squares model
formulation of the radio tomographic imaging (RTI) problem. Typically, the RTI inverse problem of
image reconstruction is ill-conditioned due to the extremely small singular values of the weight matrix, which relates the link signal strengths to the voxel locations of the obstruction. Regularization is included to offset the non-invertible nature of the weight matrix by adding a regularization term, such as the matrix approximation of derivatives in each dimension based on the difference operator. This operation yields a smooth least-squares solution for the measured data by suppressing the high-energy or noise terms in the
derivative of the image. Traditionally, a scalar weighting factor of the regularization matrix is identified by
trial and error (ad hoc) to yield the best fit of the solution to the data without either excessive smoothing or
ringing oscillations at the boundaries of the obstruction. This paper proposes new scalar and vector
regularization methods that are automatically computed based on the weight matrix. Evidence of the
effectiveness of these methods compared to the preset scalar regularization method is presented for
stationary and moving obstructions in an RTI wireless sensor network. The variation of the mean square
reconstruction error as a function of the scalar regularization is calculated for known obstructions in the
network. The vector regularization procedure based on selective updates to the singular values of the
weight matrix attains the lowest mean square error.
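As a rough illustration of the two regularization styles discussed above, the following sketch contrasts a scalar (Tikhonov-style) solution using a first-difference operator with a vector-style solution that selectively raises the small singular values of the weight matrix. The names `W`, `y`, `alpha`, and `floor` are illustrative only, not the paper's notation.

```python
import numpy as np

def tikhonov_reconstruct(W, y, alpha):
    """Scalar regularization: x = (W^T W + alpha * D^T D)^{-1} W^T y,
    where D is a first-difference operator approximating the derivative."""
    n = W.shape[1]
    D = np.diff(np.eye(n), axis=0)          # rows of the form e_{i+1} - e_i
    A = W.T @ W + alpha * (D.T @ D)
    return np.linalg.solve(A, W.T @ y)

def svd_floor_reconstruct(W, y, floor):
    """Vector-style regularization: selectively raise the small singular
    values of W to a floor before pseudo-inverting."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s_reg = np.maximum(s, floor)            # only the small values change
    return Vt.T @ ((U.T @ y) / s_reg)
```

Both routines reduce to the plain least-squares solution when the regularization parameter is driven to zero on a well-conditioned weight matrix.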
Computing architectures to process image data and optimize an objective criterion are identified. One such
objective criterion is the energy in the error function. The data is partitioned and the error function is
optimized in stages. Each stage consists of identifying an active partition and performing the optimization
with the data in this partition. The other partitions of the data are inactive, i.e., they maintain their current values.
The optimization progresses by switching between the currently active partition and the remaining inactive
partitions. In this paper, sequential and parallel update procedures within the active partition are presented.
These procedures are applied to retrieve image data from linearly degraded samples. In addition, the local
gradient of the error functional is estimated from the observed image data using simple linear convolution
operations. This optimization process is effective when the dimensions of the data and the number of
partitions increase. The purpose of developing such data processing strategies is to emphasize the
conservation of resources such as available bandwidth, computations, and storage in present-day Web-based technologies and multimedia information transfer.
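The staged, partition-by-partition minimization described above can be sketched as a block-coordinate descent on the least-squares energy: only the active partition of the unknowns is re-solved at each stage while the inactive partitions hold their values. The partitioning and solver choices below are illustrative assumptions.

```python
import numpy as np

def partitioned_ls(A, y, n_parts=2, sweeps=50):
    """Minimize ||y - A x||^2 by cycling over partitions of x: the active
    partition is updated exactly; inactive partitions keep their values."""
    n = A.shape[1]
    x = np.zeros(n)
    parts = np.array_split(np.arange(n), n_parts)
    for _ in range(sweeps):
        for idx in parts:                    # switch the active partition
            # Residual with the active block's contribution removed.
            r = y - A @ x + A[:, idx] @ x[idx]
            # Exact least-squares update of the active block only.
            x[idx] = np.linalg.lstsq(A[:, idx], r, rcond=None)[0]
    return x
```

Since each stage only touches the columns of `A` and entries of `x` in the active partition, storage and bandwidth demands scale with the partition size rather than the full problem dimension.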
Block-based discrete transform domain algorithms are developed to retrieve information from digital image data. Specifically, discrete, real, and circular Fourier transforms of the data blocks are filtered by coefficients chosen in the discrete frequency domain to improve feature detection. In this paper, the proposed approach is applied to improve the identification of edge discontinuities in digital image data.
Proc. SPIE 5558, Applications of Digital Image Processing XXVII
KEYWORDS: Digital image processing, Data storage, Image segmentation, Image processing, Digital filtering, Image restoration, Data processing, Digital imaging, Algorithm development, Computer architecture
Recursive partitioned architectures in the spatial domain and block-based discrete transform domain algorithms are developed to retrieve information from digital image data. In the former case, the data is represented in forms which facilitate efficient optimization of the objective criterion. In the latter case, discrete, real, and circular Fourier transforms of the data blocks are filtered by coefficients chosen in the discrete frequency domain. This has led to improvements in the feature detection and localization process. In this paper, recursive and iterative approaches are applied to achieve the restoration of digital images from linearly degraded samples. In addition, block-based algorithms are employed to segment images.
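A minimal sketch of the block-based transform-domain idea, assuming a simple high-pass mask in the 2-D DFT domain of each block (the papers' actual coefficient choices are not reproduced here):

```python
import numpy as np

def block_dft_highpass(img, bs=8):
    """Filter each bs x bs block in the DFT domain with a simple high-pass
    mask to emphasize discontinuities (the mask here is illustrative)."""
    out = np.zeros_like(img, dtype=float)
    H, Wd = img.shape
    for i in range(0, H, bs):
        for j in range(0, Wd, bs):
            blk = np.fft.fft2(img[i:i+bs, j:j+bs])
            blk[0, 0] = 0.0                  # zero the DC (block mean)
            out[i:i+bs, j:j+bs] = np.fft.ifft2(blk).real
    return out
```

Constant blocks produce zero output, while blocks containing an intensity discontinuity retain their deviation from the block mean, which is the basis for transform-domain feature detection.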
Image segmentation plays a crucial role in detecting cancerous lesions in breast images. Typically, the images obtained are large, and traditional image segmentation algorithms take considerable time to detect and localize lesions. To increase the efficiency of the detection process, this paper develops an efficient image segmentation algorithm that limits its attention to regions where lesions may exist. The image segmentation algorithm is then applied to these regions to find a threshold value. There are three primary objectives of this paper: first, to design and implement a region-of-interest algorithm known as the Ranking algorithm; second, to identify whether the detected regions are linked, using the Linkage algorithm; and third, to apply the image segmentation algorithm (the Otsu algorithm) to these regions to obtain a threshold value. This threshold value is then used for global image segmentation.
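The region-of-interest step can be imagined along the following lines; this is only an illustrative stand-in, since the paper's Ranking and Linkage algorithms are not reproduced here, and the block size and mean-intensity scoring are assumptions.

```python
import numpy as np

def rank_rois(img, bs=8, top=4):
    """Hypothetical ranking step: score each bs x bs block by mean
    intensity and keep the top-scoring blocks as candidate regions."""
    H, W = img.shape
    scores = []
    for i in range(0, H, bs):
        for j in range(0, W, bs):
            scores.append((img[i:i+bs, j:j+bs].mean(), i, j))
    scores.sort(reverse=True)                # brightest blocks first
    return [(i, j) for _, i, j in scores[:top]]
```

A threshold-selection routine would then be run only inside the returned blocks rather than over the full image, which is the source of the claimed efficiency gain.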
Proc. SPIE 5009, Visualization and Data Analysis 2003
KEYWORDS: Image processing algorithms and systems, Signal to noise ratio, Breast, Human-machine interfaces, Detection and tracking algorithms, Visualization, Cameras, Image segmentation, Spatial resolution, Algorithm development
The identification and localization of lesions in scintimammography breast images is a crucial stage in the early detection of cancer. Scintimammography breast images are obtained using a small, high-resolution breast-specific gamma camera (e.g., the LumaGEM™ Gamma Ray Camera, Gamma Medica Instruments, Northridge, CA). The resulting images contain information about possible lesions, but they are very noisy. This requires a robust image segmentation algorithm to accurately contour a lesion should it exist. The algorithm must perform robust localization, minimize misclassifications, and lead to efficient practical implementations despite the influence of blurring and the presence of noise. This paper discusses and implements a robust spatial-domain algorithm, the Otsu algorithm, which automatically selects a threshold level from the image histogram in order to detect and contour objects/regions in grayscale digital images. Specifically, this paper develops the algorithm used to identify cancerous lesions in breast images. There are two primary objectives: first, to design and implement a contour detection algorithm suitable for the constraints posed by scintimammography breast images, and second, to provide the physician with a Graphical User Interface (GUI) that facilitates the visualization and classification of the images.
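For reference, the Otsu criterion itself, choosing the histogram threshold that maximizes the between-class variance, can be sketched as follows (the bin count and integer-valued grayscale image are simplifying assumptions):

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Otsu's method: pick the gray level t that maximizes the
    between-class variance of the image histogram."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()                    # normalized histogram
    mu_total = np.sum(np.arange(levels) * p)
    best_t, best_var = 0, 0.0
    w0, mu0 = 0.0, 0.0
    for t in range(levels):
        w0 += p[t]                           # class-0 probability mass
        mu0 += t * p[t]                      # class-0 first moment
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        var_between = (mu_total * w0 - mu0) ** 2 / (w0 * w1)
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Pixels above the returned level would be contoured as candidate lesion regions; on a strongly bimodal histogram the maximizer falls between the two modes.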
Thresholded binary networks of the discrete Hopfield type lead to the efficient retrieval of the regularized least-squares (LS) solution in certain inverse problem formulations. Partitions of these networks are identified based on the forms of representation of the data. The objective criterion is optimized using sequential and parallel updates on these partitions. The algorithms consist of minimizing a suboptimal objective criterion in the currently active partition. Once a local minimum is attained, an inactive partition is chosen to continue the minimization. This strategy is especially effective when substantial data must be processed by resources that are constrained either in space or in available bandwidth.
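A minimal sketch of the sequential update idea on such a binary network, assuming neuron states in {0, 1} and the energy E(x) = ||y − Ax||²; a bit is flipped only when the flip lowers the energy (the partitioning layer is omitted here for brevity):

```python
import numpy as np

def sequential_binary_descent(A, y, sweeps=10):
    """Hopfield-style sequential threshold updates on x in {0,1}^n
    that monotonically reduce E(x) = ||y - A x||^2."""
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(n):
            delta = 1 - 2 * x[i]             # proposed flip of bit i
            r = y - A @ x                    # current residual
            # Energy change of the flip: -2*delta*a_i.r + delta^2*||a_i||^2
            dE = -2 * delta * (A[:, i] @ r) + delta**2 * (A[:, i] @ A[:, i])
            if dE < 0:                       # accept only energy descent
                x[i] += delta
    return x
```

Because every accepted flip strictly decreases the energy, the sequential schedule cannot limit-cycle, which is one reason sequential updates are attractive in this setting.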
Proc. SPIE 4665, Visualization and Data Analysis 2002
KEYWORDS: Signal to noise ratio, Edge detection, Detection and tracking algorithms, Image processing, Digital filtering, Error analysis, Linear filtering, Image filtering, Electronic filtering, Bandpass filters
Edges in grayscale digital imagery are detected by localizing the zero crossings of filtered data. To achieve this objective, truncated time- or frequency-sampled forms (TSF/FSF) of the Laplacian-of-Gaussian (LOG) filter are employed in the transform domain. Samples of the image are transformed using the discrete symmetric cosine transform (DSCT) prior to adaptive filtering and the isolation of zero crossings. This paper evaluates an adaptive block-wise filtering procedure based on the FSF of the LOG filter which diminishes the edge localization error, improves the signal-to-noise ratio (SNR) around the edge, and extends easily to higher dimensions. Theoretical expressions and applications to digital images are presented.
Proc. SPIE 4115, Applications of Digital Image Processing XXIII
KEYWORDS: Signal to noise ratio, Edge detection, Detection and tracking algorithms, Digital filtering, Linear filtering, Gaussian filters, Image filtering, Electronic filtering, Aluminium phosphide, Bandpass filters
The detection of edges in digital imagery based on localizing the zero crossings of filtered image data has been investigated. Truncated time- or frequency-sampled forms (TSF/FSF) of the Laplacian-of-Gaussian (LOG) filter are employed in the transform domain. Samples of the image are transformed using the discrete symmetric cosine transform (DSCT) prior to adaptive filtering and the isolation of zero crossings. The DSCT facilitates both control of the edge localization accuracy and modular implementation. The adaptive strategy for accepting or rejecting edge transitions at the appropriate locations in the image is based on estimates of the local gradient. This paper evaluates block-based filtering procedures to identify edges in terms of achievable edge localization, signal-to-noise ratio (SNR) around the edge, and computational benefits.
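As a small 1-D illustration of the zero-crossing principle underlying these papers, assuming a truncated time-sampled LOG kernel (the support and sigma values below are arbitrary, and the DSCT/adaptive machinery is omitted):

```python
import numpy as np

def log_filter_1d(signal, sigma=1.0, half=6):
    """Filter a 1-D signal with a truncated, sampled Laplacian-of-Gaussian;
    edges sit at the zero crossings of the output."""
    t = np.arange(-half, half + 1)
    k = (t**2 - sigma**2) / sigma**4 * np.exp(-t**2 / (2 * sigma**2))
    k -= k.mean()                            # zero response to constant input
    return np.convolve(signal, k, mode='same')

def zero_crossings(f):
    """Indices where the filtered signal changes sign."""
    return np.where(np.signbit(f[:-1]) != np.signbit(f[1:]))[0]
```

An adaptive stage, as in the papers, would then test the local gradient magnitude at each crossing to reject weak or noise-induced transitions.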
The recursive use of thresholded binary networks of the Hopfield type leads to the efficient retrieval of the regularized least-squares (LS) solution in certain inverse problem formulations. This strategy is especially effective when substantial data must be processed by resources that are constrained either in space or in available bandwidth. Partitions of the network are identified based on the forms of representation of the data. The objective criterion is optimized using sequential and parallel updates on these partitions. The algorithms consist of minimizing a suboptimal objective criterion in the currently active partition. Once a local minimum is attained, an inactive partition is chosen to continue the minimization. An application to digital image restoration is considered.
Several engineering applications are concerned with the accurate and efficient identification of the least-squares (LS) solution. The computational and storage requirements to determine the LS solution become prohibitively large as the dimensions of the problem grow. This paper develops an algorithm which retrieves the least-squares solution based on a steepest descent formulation. Among the advantages of this approach are improvements in computation and resource management, and ease of hardware implementation. The gradient matrix is evaluated using 2-D linear convolutions and an in-place update strategy. An iterative procedure is outlined from which both the regularized and unregularized LS solutions can be recovered. The extent of regularization is suitably controlled and imposes some constraints on the step size for steepest descent. The proposed approach is examined in the context of digital image restoration from spatially invariant linear blur degradation and compared with alternate strategies for LS recovery.
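The gradient-by-convolution idea can be sketched in 1-D as follows; the blur kernel, step size, and iteration count are illustrative, and the adjoint is taken as convolution with the flipped kernel (boundary effects are ignored):

```python
import numpy as np

def steepest_descent_restore(h, y, step=0.5, iters=500, alpha=0.0):
    """Gradient descent on ||y - h*x||^2 + alpha*||x||^2, with the gradient
    computed by linear convolutions: grad = flip(h) * (h*x - y) + alpha*x."""
    x = np.zeros_like(y)
    h_adj = h[::-1]                          # adjoint of convolution with h
    for _ in range(iters):
        r = np.convolve(x, h, mode='same') - y
        x -= step * (np.convolve(r, h_adj, mode='same') + alpha * x)
    return x
```

Setting `alpha > 0` yields the regularized LS solution; setting it to zero recovers the unregularized one, and the step size must remain below 2 divided by the largest eigenvalue of the blur normal operator for the iteration to converge.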
Edges in digital imagery can be identified from the zero-crossings of Laplacian-of-Gaussian (LOG) filtered images. Time- or frequency-sampled LOG filters have been developed for the detection and localization of edges in digital image data. The image is decomposed into overlapping subblocks and processed in the transform domain. Adaptive algorithms are developed to minimize spurious edge classifications. In order to achieve accurate and efficient implementations, the discrete symmetric cosine transform of the input data is employed in conjunction with adaptive filters. The adaptive selection of the filter coefficients is based on the gradient criterion. For instance, in the case of the frequency-sampled LOG filter, the filter parameter is systematically varied to force the rejection of false or weak edges. In addition, the proposed algorithms easily extend to higher dimensions. This is useful where 3-D medical image data containing edge information has been corrupted by noise. This paper employs isotropic and non-isotropic filters to track edges in such images.
Edges or perceptible intensity transitions in digital imagery are identified from the zero-crossings of Laplacian-of-Gaussian (LOG) filtered images. Time- or frequency-sampled LOG filters have been developed for the detection of edges in digital image data. The image is decomposed into overlapping subblocks and processed in the transform domain. In order to achieve accurate and efficient implementations, the discrete symmetric cosine transform (DSCT) of the input data is employed in conjunction with adaptive filters. The adaptive selection of the filter coefficients is based on the gradient criterion. For instance, in the case of the frequency-sampled LOG filter, the filter parameter is systematically varied to force the rejection of spurious edge classifications. In addition, the proposed algorithms easily extend to higher dimensions. This is useful where 3-D medical image data containing edge information has been corrupted by noise. This paper employs isotropic and non-isotropic filters to track edges in such images. The algorithm is implemented in 1-D, 2-D, and 3-D, and suitable examples are presented.
This paper presents discrete thresholded binary networks of the Hopfield type as feasible configurations to perform image restoration with regularization. The typically large-scale nature of image data processing is handled by partitioning these structures and adopting sequential or parallel update strategies on the partitions one at a time. Among the advantages of such architectures are the ability to efficiently utilize space-bandwidth constrained resources, obviate the need for zero self-feedback connections in sequential procedures, and diminish the likelihood of limit cycling in parallel approaches. In the case of image data corrupted by blurring and AWGN, the least-squares solution is attained in stages by switching between partitions to force energy descent. Two forms of partitioning have been discussed. The partial-neuron decomposition is seen to be more efficient than the partial-data strategy. Further, parallel update procedures are more practical from an electro-optical standpoint. The paper demonstrates the viability of these architectures through suitable examples.
An adaptive approach to edge detection using the transform domain and bandpass filtering is discussed. The method is also extended to 3-D and shown to yield better edge maps. The discrete symmetric cosine transform (DSCT) is shown to be the best transform for accurate edge detection. The theory is first discussed in 1-D. Then, the adaptive algorithm utilizing both the gradient and the Laplacian information is developed in 2-D and 3-D. Computer simulations with regular scenes and magnetic resonance images are provided. The extension of the method to 3-D leads to improved noise immunity and better edge contours.