This PDF file contains the front matter associated with SPIE Proceedings Volume 9817, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
In this paper, a new block-based lossy image compression technique is proposed that combines rank reduction of the image with the wavelet difference reduction (WDR) technique. Rank reduction is obtained by applying singular value decomposition (SVD). The input image is divided into blocks of equal size, after which quantization by SVD is carried out on each block, followed by WDR coding. Reconstruction is carried out by decompressing each block's bit stream and then merging the blocks to obtain the decompressed image. Visual and quantitative experimental results of the proposed technique are shown and compared with those of the WDR technique and JPEG2000. The comparison shows that the proposed image compression technique outperforms both WDR and JPEG2000.
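As a rough illustration of the rank-reduction stage only (the WDR coding step is omitted), the sketch below applies a truncated SVD to each block of a grayscale image. The block size and the retained rank k are hypothetical parameters chosen for illustration, not values from the paper.

import numpy as np

def svd_rank_reduce_blocks(img, block=32, k=4):
    """Approximate each block of a grayscale image by a rank-k SVD truncation.

    img   : 2D float array whose sides are multiples of `block`
    block : block size (assumed value, not from the paper)
    k     : number of singular values kept per block (assumed value)
    """
    out = np.empty_like(img, dtype=float)
    for r in range(0, img.shape[0], block):
        for c in range(0, img.shape[1], block):
            tile = img[r:r + block, c:c + block]
            U, s, Vt = np.linalg.svd(tile, full_matrices=False)
            out[r:r + block, c:c + block] = (U[:, :k] * s[:k]) @ Vt[:k, :]
    return out

# Example: rank-reduce a random 256x256 "image" block by block
approx = svd_rank_reduce_blocks(np.random.rand(256, 256), block=32, k=4)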
In this paper, we present an efficient method for obtaining interpolated images with much better PSNR than the bicubic interpolation scheme, using a new technique called targeted image enhancement (TIE).
A study of typical images decimated by a factor of 2 and regenerated with bicubic interpolation shows significant PSNR degradation, with the largest errors concentrated around object edges, since all interpolation techniques introduce some smoothing of the interpolated image. Our targeted image enhancement technique compensates for the losses in the distorted pixels of the interpolated image. One challenge in such an approach is determining the locations of pixels in the interpolated image that are subject to high distortion. We present a technique that extracts a location map of the pixels suffering high distortion in the interpolated image. This location map can be generated by both the encoder and the decoder without transmitting location information. The information needed for correcting the distorted pixels is sent as side information along with the decimated image.
Simulation studies indicate that an average PSNR of 28 dB after standard bicubic interpolation can be improved to 32 dB on average with targeted enhancement. This improvement is attained with a data overhead of only 3%.
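To make the baseline concrete, the following sketch decimates an image by a factor of 2, regenerates it with bicubic interpolation, and measures the PSNR. It uses OpenCV and only illustrates the degradation being compensated, not the TIE correction itself; 'input.png' is a placeholder filename.

import cv2
import numpy as np

def psnr(ref, test, peak=255.0):
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)          # placeholder test image
small = cv2.resize(img, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)
restored = cv2.resize(small, (img.shape[1], img.shape[0]),
                      interpolation=cv2.INTER_CUBIC)          # bicubic regeneration
err = cv2.absdiff(img, restored)                              # large errors cluster at edges
print('PSNR after bicubic upsampling: %.2f dB' % psnr(img, restored))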
As one of classic methods of frequency domain based saliency detection, Spectral residual (SR) method has shown several advantages. However, it usually produces higher saliency values at object edges instead of generating maps that uniformly cover the whole object, which results from failing to exploit all the spatial frequency content of the original image. The Two-Dimensional Fractional Fourier transform (2D-FRFT) is a generalized form of the traditional Fourier Transform (FT) which can abstract more meaningful information of the image under certain conditions. Based on this property, we propose a new method which detects the salient region based on 2D-FRFT domain. Moreover, we also use Hough transform detection and a band-pass filter to refine the saliency map. We conduct experiments on a common used dataset: MSRA. The proposed method is compared with several other saliency detection methods and shown to achieve superior result.
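For reference, a minimal NumPy sketch of the baseline spectral residual method (computed with the ordinary FFT, not the 2D-FRFT used in the paper) is given below; the averaging and smoothing widths are assumed values.

import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray, avg_size=3, blur_sigma=2.5):
    """Classic spectral residual saliency map on a 2D grayscale array."""
    F = np.fft.fft2(gray.astype(float))
    log_amp = np.log(np.abs(F) + 1e-8)
    phase = np.angle(F)
    residual = log_amp - uniform_filter(log_amp, size=avg_size)   # spectral residual
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = gaussian_filter(sal, sigma=blur_sigma)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)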
Image inpainting, the restoration of a damaged image with missing information, is a fundamental problem and an active research area in image processing. Many approaches, both geometry-oriented and texture-oriented, have been proposed for inpainting, such as total variation (TV) and the Criminisi algorithm. However, these approaches suffer from limitations: they are suitable only for small areas (cracks), produce staircase effects (discontinuities), or are inefficient (time-consuming) in searching for the best-matched patch for filling-in. In this paper, we propose a novel approach based on partial differential equations (PDEs) and the isophote direction, named Isophotes-TV-H-1. A corrupted image is first decomposed into two parts: the cartoon (the smooth parts and edges of the image) and the texture. The cartoon part is inpainted through Isophotes-TV-H-1, while the texture part is handled by an enhanced Criminisi algorithm that reduces the patch-matching time and yields more reasonable matching patches. Experiments on several images demonstrate that, compared to existing methods, the proposed solution recovers the texture of the damaged region better, suppresses error propagation, and solves the problem of intensity discontinuity.
In this paper, a technique for image enhancement based on a proposed edge boosting algorithm, which reconstructs a high-quality image from a single low-resolution image, is described. The difficulty in single-image super-resolution is that the generic image priors residing in the low-resolution input image may not be sufficient to generate effective solutions. In order to achieve a successful super-resolution reconstruction, efficient prior knowledge should be estimated. Statistics of gradient priors, in the form of a priority map based on separable gradient estimation, maximum-likelihood edge estimation, and local variance, are introduced. The proposed edge boosting algorithm takes advantage of these gradient statistics to select appropriate enhancement weights: larger weights are applied to the higher-frequency details, while the low-frequency details are smoothed. The experimental results illustrate significant quantitative and perceptual performance improvements. The proposed edge boosting algorithm produces high-quality results with fewer artifacts, sharper edges, better texture areas, and finer detail with low noise.
The local binary pattern (LBP) has proved to be useful and competitive in blind image quality assessment (BIQA). However, LBP lacks magnitude information, which limits its performance to some extent. In this paper, we introduce a novel BIQA method that uses the proposed generalized local ternary pattern (GLTP) to measure structural degradation. By introducing multiple thresholds for the gray-level differences, GLTP provides more discriminative and stable features. Moreover, GLTP incorporates magnitude information computed from the magnitudes of the horizontal and vertical first-order derivatives. Experimental results on two subject-rated databases demonstrate that the proposed method outperforms state-of-the-art BIQA models, as well as several representative full-reference image quality assessment methods, for various types of distortion.
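As background for the structural features discussed here, the following sketch computes a plain 8-neighbour LBP histogram with NumPy; the multi-threshold ternary coding and the magnitude channel of GLTP are not reproduced.

import numpy as np

def lbp_histogram(gray):
    """Basic 3x3 local binary pattern histogram (no interpolation, no uniform mapping)."""
    g = gray.astype(int)
    c = g[1:-1, 1:-1]
    # 8 neighbours, ordered clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dr, dc) in enumerate(offsets):
        nb = g[1 + dr:g.shape[0] - 1 + dr, 1 + dc:g.shape[1] - 1 + dc]
        code += (nb >= c).astype(int) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256), density=True)
    return hist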
As TV resolution has significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV content. This rising quality standard poses a new challenge now that the tape-based process has transitioned to a file-based process: the transition necessitated digitizing old archives, a process that inevitably produces errors such as disordered pixel blocks, scattered white noise, or entirely missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm that can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known non-linear editing system (NLE), which is a familiar tool for quality control agents.
This paper proposes a new evaluation criterion for band selection in hyperspectral imagery. A combination of information content and class separability is used as the new evaluation criterion, while the correlation between bands is used as a constraint. In addition, game theory is introduced into the band selection process to coordinate the potential conflict between the two criteria, information content and class separability, when searching for the optimal band combination. The experimental results show that the proposed method is effective on AVIRIS hyperspectral data.
In this paper, we describe a hardware accelerator (HWA) for fast recursive approximation of separable convolution with an exponential kernel. This filter can be used in many image processing (IP) applications, e.g. depth-dependent image blur, image enhancement, and disparity estimation. We have adapted the RTL implementation of this filter to provide maximum throughput within the constraints of required memory bandwidth and hardware resources, yielding a power-efficient VLSI implementation.
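In software terms, a recursive approximation of exponential smoothing corresponds to a separable first-order IIR pass in each direction. A minimal NumPy sketch is given below; the smoothing coefficient alpha is an illustrative value, and the exact recursion implemented in the accelerator may differ.

import numpy as np

def exp_filter_1d(x, alpha):
    """Causal + anti-causal first-order recursive (exponential) smoothing along rows."""
    y = np.array(x, dtype=np.float32)
    for i in range(1, y.shape[1]):               # forward pass
        y[:, i] += alpha * (y[:, i - 1] - y[:, i])
    for i in range(y.shape[1] - 2, -1, -1):      # backward pass
        y[:, i] += alpha * (y[:, i + 1] - y[:, i])
    return y

def exp_filter_2d(img, alpha=0.7):
    """Separable exponential smoothing: filter rows, then columns."""
    return exp_filter_1d(exp_filter_1d(img, alpha).T, alpha).T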
The classical shape distribution D2 algorithm takes the distance between two random points on the surface of a CAD model as a statistical feature and, from it, generates a feature vector used to calculate dissimilarity and perform retrieval. The algorithm is simple in principle, computationally efficient, and achieves good retrieval results for models with simple shapes. Based on an analysis of the D2 algorithm's shape distribution curve, this paper enhances the algorithm's ability to describe a model's overall shape by additionally collecting statistics of the angle between the normal vectors at two random points, which in particular distinguishes a model's planar features from its curved-surface features. It also introduces the ratio of the line segment between two random points that is cut off by the model's surface, to enhance the algorithm's ability to describe a model's detailed features. Finally, by integrating these two shape descriptors with the original D2 algorithm, this paper proposes a new retrieval method based on hierarchical multi-features. Experimental results show that this method yields substantial improvements and better retrieval results than the traditional 3D CAD model retrieval method.
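For readers unfamiliar with the baseline, the classical D2 descriptor can be sketched as follows for a model given as a sampled surface point cloud; the sample count and histogram bin count are arbitrary illustrative choices.

import numpy as np

def d2_shape_distribution(points, n_pairs=100000, bins=64, seed=None):
    """Histogram of distances between random point pairs (classical D2 descriptor).

    points : (N, 3) array of points sampled on the model surface
    """
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d, bins=bins, range=(0, d.max()), density=True)
    return hist

# Dissimilarity between two models can then be taken as, e.g., the L1 distance
# between their D2 histograms:
# diff = np.abs(d2_shape_distribution(p1) - d2_shape_distribution(p2)).sum()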
Spectral unmixing is one of the key techniques for identifying and classifying materials in hyperspectral image processing. A novel robust spectral unmixing method based on nonnegative matrix factorization (NMF) is presented in this paper. We use an edge-preserving function as a hypersurface cost function for the nonnegative matrix factorization. To minimize this cost function, we construct update rules for the end-member signature matrix and the abundance fractions, and the two are updated alternately. For evaluation, both synthetic and real data are used: the synthetic data are generated from end-members in the USGS digital spectral library, and the AVIRIS Cuprite dataset is used as real data. The spectral angle distance (SAD) and abundance angle distance (AAD) are used to assess the performance of the proposed method. The experimental results show that this method achieves better results and higher accuracy in spectral unmixing than existing methods.
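For orientation, the alternating-update structure can be illustrated with the standard multiplicative-update NMF under a Euclidean cost (not the edge-preserving hypersurface cost proposed in the paper); the iteration count is an assumed value.

import numpy as np

def nmf_unmix(Y, n_endmembers, n_iter=500, eps=1e-9, seed=0):
    """Standard multiplicative-update NMF, Y ~ W @ H.

    Y : (bands, pixels) nonnegative data matrix
    W : (bands, n_endmembers) end-member signatures
    H : (n_endmembers, pixels) abundance fractions
    """
    rng = np.random.default_rng(seed)
    W = rng.random((Y.shape[0], n_endmembers)) + eps
    H = rng.random((n_endmembers, Y.shape[1])) + eps
    for _ in range(n_iter):                     # alternate the two updates
        H *= (W.T @ Y) / (W.T @ W @ H + eps)
        W *= (Y @ H.T) / (W @ H @ H.T + eps)
    return W, H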
An improved adaptive intensity-hue-saturation (IHS) method for image fusion is proposed in this paper, based on the adaptive IHS (AIHS) method and its improved variant (IAIHS). In the improved method, the weighting matrix, which decides how much of the spatial detail in the panchromatic (Pan) image should be injected into the multispectral (MS) image, is defined on the basis of the linear relationship between the edges of the Pan and MS images. At the same time, a modulation parameter t is used to balance the spatial and spectral resolution of the fused image. Experiments show that the improved method improves spectral quality while maintaining spatial resolution compared with the AIHS and IAIHS methods.
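To show where the weighting matrix acts, here is a minimal sketch of plain IHS-style detail injection (a generic baseline, not the adaptive weighting of this paper); the simple band mean used as the intensity component is an assumption.

import numpy as np

def ihs_fusion(ms, pan):
    """Plain IHS-style detail injection.

    ms  : (H, W, B) multispectral image, already upsampled to the Pan grid
    pan : (H, W) panchromatic image
    """
    intensity = ms.mean(axis=2)                 # simple intensity component
    detail = pan - intensity                    # spatial detail to inject
    return ms + detail[:, :, None]              # same detail added to every band

# An adaptive variant would scale `detail` per pixel and per band with a weighting
# matrix derived from the edges of the Pan and MS images, as described above.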
Hyperspectral imagery is widely used in various fields because of its rich feature information, and increasingly high requirements are placed on its quality. Owing to the limitations of imaging semiconductor technology, hyperspectral image resolution needs to be improved by signal processing. This paper presents a super-resolution recovery algorithm for hyperspectral images, in both the spatial and spectral coordinates, based on a redundant dictionary. Compared with traditional image super-resolution restoration algorithms, super-resolution restoration in the spectral dimension of the hyperspectral image is added on top of the spatial resolution improvement. An additional constraint from the original image is incorporated into the algorithm, and the edges of the reconstructed image are sharpened using a maximum a posteriori estimate. The results show that this algorithm effectively improves the spatial and spectral resolution of hyperspectral imagery.
The detection of brain tumors in MR images is very important for medical diagnosis and treatment. However, existing methods are mostly based on manual or semi-automatic segmentation, which is awkward when dealing with a large number of MR slices. In this paper, a new fully automatic method for the segmentation of brain tumors in MR slices is presented. Based on the hypothesis of a symmetric brain structure, the method improves the interactive GrowCut algorithm by further applying a bounding box algorithm in the pre-processing step. More importantly, local reflectional symmetry is used to compensate for the deficiencies of the bounding box method. After segmentation, a 3D tumor image is reconstructed. We evaluate the accuracy of the proposed method on MR slices with synthetic tumors and on actual clinical MR images. The results of the proposed method are compared qualitatively and quantitatively with the actual position of the simulated 3D tumor. In addition, our method achieves performance equivalent to manual segmentation and to interactive GrowCut with manual intervention, while being fully automatic.
Segmentation of the hippocampus is one of the major challenges in medical image segmentation because of its imaging characteristics: its intensity is almost identical to that of adjacent gray matter structures such as the amygdala. This intensity similarity gives the hippocampus weak or fuzzy boundaries. Given this challenge, a segmentation method that relies on image information alone may not produce accurate results, so prior information such as shape and spatial information must be assimilated into an existing segmentation method. Previous studies have widely integrated prior information into segmentation methods; however, the prior information has been incorporated in a global manner, which does not reflect the real scenario of clinical delineation. In this paper, therefore, prior information is integrated locally into a level set model. A mean shape model provides automatic initialization for the level set evolution and is incorporated into the level set model as prior information. The local integration of edge-based information and prior information is implemented through an edge weighting map that decides, at the voxel level, which information should be observed during the level set evolution; the map indicates which voxels carry sufficient edge information. Experiments show that the proposed local integration of prior information into a conventional edge-based level set model, the geodesic active contour, yields a 9% improvement in average Dice coefficient.
The purpose of this paper is the study of efficient methods for image binarization, with the objective of binarizing metro maps. The goal is to binarize the map while preventing noise from disturbing the reading of subway stations. Different methods have been tested; among them, Otsu's method gives particularly interesting results. The difficulty of binarization lies in choosing the threshold so as to reconstruct an image that is as close as possible to reality. Vectorization is the step that follows binarization: it retrieves the coordinates of the points containing information and stores them in two matrices, X and Y. These matrices can then be exported to a CSV (comma-separated values) file, allowing them to be handled in a variety of software, including Excel. The algorithm requires considerable computation time in Matlab because it is composed of two nested 'for' loops, which Matlab handles poorly, especially when nested; this penalizes the computation time, but it appears to be the only way to do this.
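A vectorized equivalent of the binarize-then-export step, avoiding explicit nested loops, might look like the sketch below (Python/OpenCV rather than the Matlab code described in the paper); 'metro_map.png' is a placeholder filename, and which polarity counts as "informative" depends on the map.

import numpy as np
import cv2

img = cv2.imread('metro_map.png', cv2.IMREAD_GRAYSCALE)        # placeholder filename
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# "Vectorization": coordinates of the informative (here, dark) pixels, stored as X and Y
ys, xs = np.nonzero(binary == 0)
np.savetxt('points.csv', np.column_stack((xs, ys)),
           fmt='%d', delimiter=',', header='X,Y', comments='')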
In this paper, we address the issue of the over-segmented regions produced by the watershed transform by merging the regions using a global feature. The global feature information is obtained by clustering the image in its feature space using Fuzzy C-Means (FCM) clustering. The over-segmented regions produced by applying the watershed to the gradient of the image are then mapped to this global information in the feature space. The global feature information is further optimized using simulated annealing (SA). The optimal global feature information is used to derive the similarity criterion for merging the over-segmented watershed regions, which are represented by a region adjacency graph (RAG). The proposed method has been tested on a digital brain phantom simulated dataset to segment the white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) soft-tissue regions. The experiments show that the proposed method performs statistically better than immersion watershed, with an average of 95.242% of regions merged, and gives an average accuracy improvement of 8.850% compared with RAG-based immersion watershed merging using global and local features.
Immunohistochemical (IHC) staining is commonly used for detecting cells in microscopy and for analyzing many types of disease, e.g. breast cancer. A dispersion problem often exists in cell staining, which affects the accuracy of automatic counting. In this paper, we introduce a new method to overcome this problem. Otsu's thresholding method is first applied to exclude the background, so that only cells with dispersed staining remain in the foreground; refinement is then applied by a local adaptive thresholding method according to the irregularity index of the segmented foreground shape. The segmentation results are also compared with the refinement results obtained using Otsu's thresholding method alone. Cell classification based on shape and color indices obtained from the segmentation result is applied to classify each cell's condition as normal, abnormal, or suspected abnormal.
Visceral leishmaniasis is a parasitic disease that affects the liver, spleen, and bone marrow. According to a World Health Organization report, a definitive diagnosis is possible only by direct observation of Leishman bodies in microscopic images taken from bone marrow samples. We use morphological operations and the CV level set method to segment Leishman bodies in digital color microscopic images captured from bone marrow samples. Linear contrast stretching is used for image enhancement, and morphological processing is applied to determine the parasite regions and remove unwanted objects. Modified global and local CV level set methods are proposed for segmentation, and a shape-based stopping factor is used to speed up the algorithm. Manual segmentation is taken as the ground truth to evaluate the proposed method. The method was tested on 28 samples and achieved a mean segmentation error of 10.90% for the global model and 9.76% for the local model.
Tendinopathy has become a common clinical issue in recent years. In most cases, such as trigger finger or tennis elbow, the pathological change can be observed under H and E stained tendon microscopy. However, qualitative analysis is too subjective, and the results therefore depend heavily on the observer. We develop an automatic segmentation procedure that segments and counts the nuclei in H and E stained tendon microscopy quickly and precisely. The procedure first determines the complexity of the image and then segments the nuclei. For complex images, the proposed method adopts sampling-based thresholding to segment the nuclei, while for simple images, Laplacian-based thresholding is employed to re-segment the nuclei more accurately. In the experiments, the proposed method is compared with results outlined by experts. The nuclei count of the proposed method is close to the experts' count, and its processing time is much shorter than that of the experts.
In this paper, a simple and robust algorithm is proposed for iris segmentation. The proposed method consists of two steps. In the first step, the iris and pupil are segmented using the Robust Spatial Kernel FCM (RSKFCM) algorithm. RSKFCM is based on the traditional Fuzzy c-Means (FCM) algorithm, incorporating spatial information and using a kernel metric as the distance measure. In the second step, the small eigenvalue transformation is applied to localize the iris boundary. The transformation is based on the statistical and geometrical properties of the small eigenvalue of the covariance matrix of a set of edge pixels. Extensive experiments are carried out on standard benchmark iris datasets (viz. CASIA-IrisV4 and UBIRIS.v2), and we compare our proposed method with existing iris segmentation methods. Our method has the lowest time complexity, O(n(i+p)). The experimental results show that the proposed algorithm outperforms existing iris segmentation methods.
In camera-based engagement level recognition, the face is an important factor because the cues mainly come from the face, which is affected by the distance between the camera and the user. In this paper, we present an automatic engagement level recognition method that shows stable performance regardless of this distance. We describe in detail the process of obtaining a distance-invariant cue and compare performance with and without it. We also adopt a temporal pyramid structure to extract temporal statistical features and present a voting method for engagement level estimation. We present results and analysis using a database acquired in a real environment.
Expression change is the major cause of local plastic deformation of the facial surface. Under large expression changes, the intra-class differences can exceed the inter-class differences, making it difficult to recognize the same individual across facial expressions. In this paper, an expression-robust 3D face recognition method is proposed that learns an expression deformation model. The expressions of the individuals in the training set are modeled by principal component analysis, and the main components are retained to construct the facial deformation model. For a test 3D face, the shape difference between the test face and a neutral face in the training set is used to reconstruct the expression change with the constructed deformation model, and the reconstruction residual error is used for face recognition. The average recognition rate reaches 85.1% on GavabDB and 83% on a self-built database, showing strong robustness to expression changes.
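A minimal sketch of the PCA-and-residual idea, assuming the training deformations have already been flattened into rows of equal length (the registration and correspondence steps of the paper are not shown, and the component count is an illustrative value):

import numpy as np

def fit_expression_model(deformations, n_components=20):
    """PCA on training shape differences (expressive minus neutral), one per row."""
    mean = deformations.mean(axis=0)
    U, s, Vt = np.linalg.svd(deformations - mean, full_matrices=False)
    return mean, Vt[:n_components]              # mean deformation + main components

def reconstruction_residual(diff, mean, components):
    """Project a test shape difference onto the model and return the residual error."""
    coeffs = (diff - mean) @ components.T
    recon = mean + coeffs @ components
    return np.linalg.norm(diff - recon)

# Identification: compute `diff` of the test face against each gallery neutral face
# and pick the identity giving the smallest reconstruction residual.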
Vehicle model matching from the side view is a problem that meets the practical needs of actual users but has received little attention from researchers. We propose an improved feature-space-based algorithm for this problem. The algorithm combines the advantages of several classic algorithms, effectively fusing global and local features, eliminating data redundancy, and improving data separability; classification is finally performed by a fast and efficient KNN classifier. Tests on real scenes show that the proposed method is robust, accurate, insensitive to external factors, adaptable to large angle deviations, and suitable for deployment in a production application.
Image classification is one of the most challenging tasks in computer vision, and a general multiclass classifier could solve many different tasks in image processing. Classification is usually done by shallow learning over predefined objects, which is a difficult task and very different from human vision, which is based on continuous learning of object classes; a human requires years to learn a large taxonomy of objects that are neither disjoint nor independent. In this paper, I present a system based on the Google image similarity algorithm and the Google image database, which can classify a large set of different objects in a human-like manner, identifying related classes and taxonomies.
Moving foreground detection is an important step for many applications in intelligent surveillance systems and computer vision. Previous researchers have developed many different moving foreground detection technologies, such as background subtraction and optical flow. However, to our knowledge, little work has investigated ensemble methods that integrate various foreground detection technologies in real time. In this paper, we present a new approach, inspired by ensemble systems in machine learning, that detects the moving foreground using a weighting matrix with spatial characteristics. Furthermore, the weights are automatically rescaled over time, providing flexible parameterization. The experimental results demonstrate that the proposed method not only provides performance comparable with state-of-the-art methods but also meets real-time requirements.
IC marking provides information about integrated circuit chips, such as product function and classification, so IC marking inspection is one of the essential processes in semiconductor fabrication. A real-time IC chip marking defect inspection method is presented in this paper. The method comprises the following steps: chip position detection, character segmentation, feature extraction, and classification. The extracted features are used in a back-propagation neural network to classify the types of marking errors, such as illegible characters, missing characters, and misprinted characters. Character segmentation is an essential part of the inspection method; it is a considerable challenge to segment touching and broken characters correctly, owing to uneven illumination, motion blur, and problems in the printing process. In order to segment the characters rapidly and accurately, a novel approach for character segmentation based on vertical projection and character features is proposed. Experiments using a TSSOP20-packaged chip demonstrate that our method can inspect an IC marking with 17 different characters in just 130 ms. The system achieves a maximum recognition rate of 98.5%. As a result, it is an ideal solution for a real-time IC marking recognition and defect inspection system.
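The core of vertical-projection segmentation can be sketched as below for a binarized marking line; the gap-merging threshold is an illustrative parameter, and the character-feature refinements of the paper are not reproduced.

import numpy as np

def segment_characters(binary, min_gap=2):
    """Split a binarized marking line into character slices via its vertical projection.

    binary : 2D array, nonzero where ink is present
    Returns a list of (start_col, end_col) column intervals.
    """
    profile = (binary > 0).sum(axis=0)          # ink count per column
    cols = profile > 0
    spans, start = [], None
    for x, has_ink in enumerate(cols):
        if has_ink and start is None:
            start = x
        elif not has_ink and start is not None:
            spans.append((start, x))
            start = None
    if start is not None:
        spans.append((start, len(cols)))
    # Merge spans separated by very small gaps (likely broken characters)
    merged = []
    for s in spans:
        if merged and s[0] - merged[-1][1] < min_gap:
            merged[-1] = (merged[-1][0], s[1])
        else:
            merged.append(s)
    return merged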
In this paper, a new fast frequency-domain method based on the discrete wavelet transform (DWT) and the fast Fourier transform (FFT) is implemented for determining the skew angle of a document image. First, the image size is reduced using the two-dimensional DWT, and then the skew angle is computed using the FFT; the skew angle error is almost negligible. The proposed method is tested on a large number of documents with skew between -90° and +90°, and the results are compared with the moments-with-DWT method and other commonly used existing methods. The method is found to work more efficiently than the existing methods. It also works with typed and picture documents of different fonts and resolutions, overcoming the drawback of the recently proposed moments-with-DWT method, which does not work with picture documents.
In order to achieve accurate detection of small, dim infrared maritime targets, this paper proposes a target detection algorithm based on local peak detection and pipeline filtering. The method first extracts candidate targets through local peak detection and removes most non-target peaks with a self-adaptive thresholding process. Pipeline filtering is then used to eliminate residual interference so that only real targets are retained. The experimental results show that this method performs well in target detection, and its missed-detection rate and false alarm rate largely meet practical requirements.
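A minimal sketch of the local peak detection stage, assuming a mean-plus-k-standard-deviations adaptive threshold (the window size and multiplier k are illustrative, and the pipeline-filtering stage is only indicated in a comment):

import numpy as np
from scipy.ndimage import maximum_filter

def detect_local_peaks(frame, window=5, k=4.0):
    """Candidate target pixels: local maxima that also exceed an adaptive threshold.

    window : neighbourhood size for the local-maximum test
    k      : multiplier for the mean + k*std threshold (illustrative value)
    """
    frame = frame.astype(float)
    is_peak = frame == maximum_filter(frame, size=window)
    threshold = frame.mean() + k * frame.std()          # self-adaptive threshold
    ys, xs = np.nonzero(is_peak & (frame > threshold))
    return list(zip(ys, xs))

# Pipeline filtering would then keep only those peaks that reappear near the same
# position over several consecutive frames.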
This paper presents an algorithm for feature detection and clustering in aerial photographs based on artificial neural networks. The presented approach does not focus on the detection of specific topographic features but on the analysis of general features, their use for clustering, and the backward projection of the clusters onto the aerial image. The basis of the algorithm is the calculation of the total error of the network and the adjustment of the network weights to minimize that error. A classic bipolar sigmoid was used as the activation function of the neurons, and basic backpropagation was used for learning. To verify that a set of features can represent the image content from the user's perspective, a web application was built (ASP.NET on the Microsoft .NET platform). The main findings are that man-made objects in aerial images can be successfully identified by detecting shapes and anomalies, and that an appropriate combination of comprehensive features describing the colors and selected shapes of individual areas can be useful for image analysis.
Convolutional neural networks (CNNs) have dramatically boosted the performance of various computer vision tasks, with the exception of visual tracking, owing to the lack of training data. In this paper, we pre-train a deep CNN offline to classify one million images from 256 classes, using very leaky non-saturating neurons to accelerate training, and transform it into a discriminative classifier by adding an additional classification layer. In addition, we propose a novel approach for incrementally combining our CNN classifiers in a cascade structure through a modification of the AdaBoost framework, and then transfer the selected discriminative features from the ensemble of CNN classifiers to the visual tracking task, updating online to robustly discard background regions from promising object-like regions and cope with appearance changes of the target. Extensive experimental evaluation on an open tracker benchmark demonstrates the outstanding performance of our tracker, which improves tracking success rate and tracking precision by at least 9.2% and 13.9% on average over other state-of-the-art trackers.
Visual tracking is a challenging problem in computer vision, and a significant number of trackers have been proposed in recent years. Among them, tracking with dense spatio-temporal context has proved to be an efficient and accurate method. Unlike trackers with online-trained classifiers, which struggle to meet the requirements of real-time tracking, a spatio-temporal context tracker can run at hundreds of frames per second using the fast Fourier transform (FFT). Nevertheless, the performance of the spatio-temporal context tracker relies heavily on the learning rate of the context, which restricts its robustness.
In this paper, we propose a tracking method with dual spatio-temporal context trackers that use different learning rates during tracking. The tracker with the high learning rate tracks the target smoothly when its appearance changes, while the tracker with the low learning rate perceives occlusions and continues tracking when the target re-emerges. To select the target among the candidates from these two trackers, we adopt the normalized correlation coefficient (NCC) to evaluate the confidence of each sample. Experimental results show that the proposed algorithm performs robustly compared with several state-of-the-art tracking methods.
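The NCC confidence used to arbitrate between the two trackers can be computed as in the sketch below, assuming the candidate patches and the stored template have already been cropped to the same size:

import numpy as np

def ncc(patch, template):
    """Normalized correlation coefficient between two equally sized gray patches."""
    a = patch.astype(float).ravel() - patch.mean()
    b = template.astype(float).ravel() - template.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

# Given candidate patches from the fast- and slow-learning trackers, keep the one
# whose NCC against the stored target template is larger:
# best = max(candidates, key=lambda p: ncc(p, template))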
In this paper, we propose a stereo-vision-based pedestrian detection method using multiple features for automotive applications. The disparity map from the stereo vision system and multiple features are used to enhance pedestrian detection performance, because the disparity map provides 3D information that makes it easy to detect obstacles and reduces the overall detection time by removing unnecessary background. A road feature is extracted from the v-disparity map computed from the disparity map; it serves as the decision criterion for determining the presence or absence of obstacles on the road. Obstacle detection is performed by comparing the road feature with every column of the disparity map. The detection result is segmented via bird's-eye-view mapping, which separates an obstacle area containing multiple objects into single obstacle areas, with histogram-based clustering performed in the bird's-eye-view map. Each segmented result is verified by a classifier with a trained model. To enhance pedestrian recognition performance, multiple features such as HOG, CSS, and symmetry features are used; in particular, the symmetry feature is well suited to representing a standing or walking pedestrian. A block-based symmetry feature is used to minimize the dependence on image type, and the best of the three symmetry features from the H, S, and V channels is selected as the symmetry feature at each pixel. The ETH database is used to verify our pedestrian detection algorithm.
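For context, the v-disparity map referred to above is simply a per-row histogram of disparity values, in which the road surface appears as a slanted line; a minimal sketch (with an assumed maximum disparity) is:

import numpy as np

def v_disparity(disparity, max_disp=64):
    """Per-row histogram of disparity values (the v-disparity map).

    disparity : (H, W) disparity map; invalid pixels are <= 0
    Returns an (H, max_disp) array; the road surface shows up as a slanted line.
    """
    H = disparity.shape[0]
    vmap = np.zeros((H, max_disp), dtype=np.int32)
    for v in range(H):
        row = disparity[v]
        valid = row[(row > 0) & (row < max_disp)].astype(int)
        vmap[v] = np.bincount(valid, minlength=max_disp)[:max_disp]
    return vmap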
Action recognition is a very challenging task in real-time video surveillance. Traditional action recognition models are built from spatio-temporal features and bag-of-features representations. Building on this model, current research tends to introduce dense sampling to achieve better performance. However, such approaches are computationally intractable for large video datasets, so some recent works have focused on feature reduction to speed up the algorithm without reducing accuracy.
In this paper, we propose a novel selective feature sampling strategy for action recognition. First, the optical flow field is estimated throughout the input video. Sparse FAST (Features from Accelerated Segment Test) points are then selected within the motion regions detected from the optical flow on temporally down-sampled image sequences. These selected features, the sparse FAST points, serve as seeds for generating 3D patches, from which a simplified LPM (Local Part Model) is formed, greatly speeding up the model. Moreover, MBHs (Motion Boundary Histograms) computed from the optical flow are adopted in the framework to further improve efficiency. Experimental results on the UCF50 dataset and our own dataset show that our method comes closer to real-time operation and achieves higher accuracy than other competitive recently published methods.
Object tracking is a challenging task in computer vision. Most state-of-the-art methods maintain an object model and update it with new examples obtained from incoming frames in order to handle appearance variation. Updating the object model frame by frame without any censorship mechanism inevitably introduces model drift. In this paper, we adopt a multi-expert tracking framework that is able to correct the effect of bad updates after they happen, such as those caused by severe occlusion, which is exactly the ability a robust tracking method should possess. The expert ensemble consists of a base tracker and its former snapshots. The tracking result is produced by the current tracker, which is selected by means of a simple loss function. We adopt an improved compressive tracker as the base tracker and modify it to fit the multi-expert framework. The proposed multi-expert tracking algorithm significantly improves the robustness of the base tracker, especially in scenes with frequent occlusions and illumination variations. Experiments on challenging video sequences, with comparisons to several state-of-the-art trackers, demonstrate the effectiveness of our method, and our tracking algorithm runs in real time.
Currently, high-speed vision platforms are widely used in many applications, such as robotics and the automation industry. However, in traditional high-speed vision platforms, a personal computer (PC), whose large size makes it unsuitable for compact systems, is an indispensable component for human-computer interaction. This paper therefore develops an embedded real-time high-speed vision platform, ER-HVP Vision, which can work entirely without a PC. In this new platform, an embedded CPU-based board is designed as a substitute for the PC, and a DSP-and-FPGA board is developed, implementing parallel image algorithms in the FPGA and sequential image algorithms in the DSP. The resulting ER-HVP Vision system measures 320 mm x 250 mm x 87 mm, offering its capability in a more compact form. Experimental results also indicate that real-time detection and counting of a moving target at a frame rate of 200 fps at 512 x 512 pixels is feasible on this newly developed vision platform.
The broadening application of cone-beam computed tomography (CBCT) in medical diagnostics and nondestructive testing necessitates advanced denoising algorithms for its 3D images. In this research, the block-matching and four-dimensional filtering algorithm with adaptive variance (BM4D-AV) is applied to 3D image denoising. To optimize it, the key filtering parameters of the BM4D-AV algorithm are first assessed on simulated CBCT images, and a table of optimized filtering parameters is obtained. Then, considering the complexity of the noise in realistic CBCT images, possible noise standard deviations in BM4D-AV are evaluated to establish a selection principle for realistic denoising. The corresponding experiments demonstrate that the BM4D-AV algorithm with optimized parameters provides excellent denoising of realistic 3D CBCT images.
Recent graphics processing units (GPUs) can process general-purpose applications as well as graphics applications with the help of various user-friendly application programming interfaces (APIs) supported by GPU vendors. Unfortunately, utilizing the hardware resources of the GPU efficiently is a challenging problem, since the GPU architecture is totally different from the traditional CPU architecture. To address this, many studies have focused on techniques for improving system performance using GPUs. In this work, we analyze GPU performance while varying GPU parameters such as the number of cores and the clock frequency. According to our simulations, GPU performance improves by 125.8% and 16.2% on average as the number of cores and the clock frequency increase, respectively. However, performance saturates when memory bottlenecks occur owing to heavy data requests to memory. GPU performance can therefore be improved by reducing the memory bottleneck through dynamic adjustment of GPU parameters.
In this paper, we put forward a novel approach for nonlinear camera calibration based on the hierarchical teaching-and-learning-based optimization (HTLBO) algorithm, which simulates the teaching and learning interactions of teachers and learners in a classroom. Unlike traditional calibration approaches, the proposed technique can find a near-optimal solution without accurate initial parameter estimates (only very loose parameter bounds are required). With the introduction of a cascade of teaching, convergence is rapid and the global search ability is improved. Results from our study demonstrate the excellent performance of the proposed technique in terms of convergence, accuracy, and robustness. Owing to its good portability, HTLBO can also be used to solve many other complex nonlinear calibration optimization problems.
An effective way to accelerate the finite-difference time-domain (FDTD) method is to use a graphics processing unit (GPU). This paper describes an implementation of the three-dimensional FDTD method with a CPML boundary condition on a Kepler (GK110) architecture GPU. We optimize the FDTD domain decomposition for the Kepler GPU, and several Kepler-specific optimizations are then studied and applied to the FDTD program. The optimized program achieves up to a 270.9x speedup over the sequential CPU version, and the experiments show that 22.2% of the simulation time is saved compared to the GPU version without these optimizations. The solution is also faster than previous work.
High-resolution satellite images currently play an important role in target detection applications. This article focuses on ship target detection in high-resolution panchromatic images. Taking advantage of geographic information such as the coastline vector data provided by the NOAA Medium Resolution Coastline program, the land region, a main source of noise in the ship detection process, is masked out. The algorithm then deals with cloud noise, which appears frequently in ocean satellite images and is another cause of false alarms. Based on an analysis of the characteristics of cloud noise in the frequency domain, we introduce a windowed noise filter to remove it. With the help of morphological processing algorithms adapted to target detection, we obtain well-shaped ship targets. In addition, we display the extracted information, such as the length and width of the ship targets, in a user-friendly way, i.e. as a KML file interpreted by Google Earth.
A new mixed-mode filter based on the modified differential difference current conveyor (MDDCC) is proposed. The structure of the filter is simple: the circuit consists of only three active MDDCCs, five resistors, and three grounded capacitors. The filter operates in both current mode and voltage mode and realizes low-pass, band-pass, and high-pass biquad functions simultaneously. PSPICE simulations using a 0.18 μm TSMC CMOS process validate the theoretical results for the proposed circuit.
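As a purely software-side illustration of the three simultaneous outputs such a biquad provides, the snippet below evaluates textbook second-order low-pass, band-pass, and high-pass transfer functions; the pole frequency and quality factor are assumed values and are not derived from the MDDCC circuit.

```python
# Generic second-order (biquad) low-pass, band-pass, and high-pass responses
# computed from textbook transfer functions sharing one denominator.
import numpy as np
from scipy import signal

w0, Q = 2 * np.pi * 1e4, 0.707             # assumed pole frequency and quality factor
den = [1, w0 / Q, w0**2]                    # s^2 + (w0/Q)s + w0^2
responses = {
    "low-pass":  signal.TransferFunction([w0**2], den),
    "band-pass": signal.TransferFunction([w0 / Q, 0], den),
    "high-pass": signal.TransferFunction([1, 0, 0], den),
}
w = np.logspace(3, 6, 500) * 2 * np.pi
for name, tf in responses.items():
    _, mag, _ = signal.bode(tf, w)
    print(name, "gain at w0: %.1f dB" % np.interp(w0, w, mag))
```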
Image filtering is a fundamental step in image processing pipelines and finds many applications in segmentation, salient feature detection, colorization, stylization, and so on. In recent years, several nonlinear filters aimed at edge-preserving smoothing have been proposed in different fields. However, none of these filters is ideal for all applications, owing to their particular model assumptions and solution strategies. In this paper, we give a brief introduction to several of them, particularly from the graphics field, and compare their advantages and limitations through experiments. We hope to offer a helpful starting point for researchers who wish to select or improve them.
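As one concrete representative of this filter family, the following is a brute-force bilateral filter sketch; the window radius and the sigma values are arbitrary and do not correspond to the paper's experimental setup.

```python
# Brute-force bilateral filter: spatial Gaussian weights times range (intensity)
# Gaussian weights, so strong edges are preserved while flat regions are smoothed.
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    img = img.astype(float)
    out = np.zeros_like(img)
    h, w = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))        # spatial weights
    pad = np.pad(img, radius, mode="edge")
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng_w = np.exp(-(patch - img[y, x])**2 / (2 * sigma_r**2))  # range weights
            weights = spatial * rng_w
            out[y, x] = (weights * patch).sum() / weights.sum()
    return out
```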
Speckle noise is inherent in any coherent imaging process and lowers the signal-to-noise ratio (SNR), which degrades image quality. Speckle reduction is particularly important in tissue harmonic imaging (THI), since the harmonic signal has lower energy and poorer SNR than fundamental imaging (FI). Recently, plane wave imaging (PWI) has been widely explored: because the entire imaging region is covered in a single emission, the frame rate increases greatly. In PWI, speckle can be reduced by incoherently averaging images with different speckle patterns. Such images can be acquired by varying the angle from which a target is imaged (spatial compounding, SC) or by changing the spectrum of the pulse (frequency compounding, FC). In this paper we demonstrate that each approach is only a partial solution and that combining them gives a better result than applying either separately. We propose a spatial-frequency compounding (SFC) method for THI that yields good speckle suppression. To illustrate the performance of the method, experiments were conducted on simulated data; a nonlinear simulation platform based on the full-wave model was used for the harmonic imaging simulation. Results show that our method improves the SNR by up to 50% compared with single-frame harmonic imaging while maintaining far better resolution and contrast than FI. Similar results were obtained in our further experiments.
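The compounding step itself reduces to an incoherent average, as the sketch below suggests; the beamforming, the harmonic separation, and the full-wave simulation are assumed to have already produced the envelope images, and the sqrt(N) remark reflects the standard fully-developed-speckle model rather than the paper's measured figures.

```python
# Incoherent compounding: envelope images acquired at several steering angles
# and several pulse bands are averaged to suppress speckle.
import numpy as np

def spatial_frequency_compound(envelopes):
    """envelopes: nested list [n_angles][n_bands] of 2-D envelope images."""
    stack = np.stack([img for per_angle in envelopes for img in per_angle])
    return stack.mean(axis=0)          # incoherent (magnitude) average

def speckle_snr(envelope):
    """Speckle SNR over a uniform region: mean / standard deviation."""
    return envelope.mean() / envelope.std()

# Averaging N independent speckle patterns raises this ratio by roughly sqrt(N),
# which is the mechanism behind the reported SNR gain.
```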
This study illustrates the spatio-temporal dynamics of urban growth and land-use change in the city of Samara, Russia, from 1975 to 2015. Landsat satellite imagery from five different time periods between 1975 and 2015 was acquired, and the changes were quantified with ArcGIS 10.1 software. By applying classification methods to the satellite images, four main land-use types were extracted: water, built-up, forest, and grassland. The area coverage of each land-use type at the different points in time was then measured and coupled with population data. The results show that the population increased from 1146 thousand to 1244 thousand between 1975 and 1990, then first declined and later rose again to the current 1173 thousand. The built-up area changed along with the population: it increased by 37.01% from 1975 to 1995, decreased by 88.83% up to 2005, and increased again by 39.16% from 2005 to 2015, accompanying population growth and migration from rural areas driven by economic growth and the technological advantages associated with urbanization. Information from urban growth and land-use/land-cover change studies is very useful to local government and urban planners in preparing future plans for the sustainable development of the city.
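A tiny sketch of the area-change bookkeeping is given below; the 30 m pixel size is the usual Landsat value, and the pixel counts are invented purely for illustration.

```python
# Per-class pixel counts from each classified Landsat scene are converted to
# area and compared between dates as a percentage change.
PIXEL_AREA_KM2 = (30 * 30) / 1e6          # 30 m Landsat pixel in km^2

def class_area_km2(pixel_counts):
    return {c: n * PIXEL_AREA_KM2 for c, n in pixel_counts.items()}

def percent_change(area_t1, area_t2, cls):
    return 100.0 * (area_t2[cls] - area_t1[cls]) / area_t1[cls]

a1975 = class_area_km2({"built-up": 420_000, "forest": 900_000})   # illustrative counts
a1995 = class_area_km2({"built-up": 575_000, "forest": 860_000})
print("built-up change 1975-1995: %.2f%%" % percent_change(a1975, a1995, "built-up"))
```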
As an important method in the field of velocity measurement, particle image velocimetry (PIV), which follows the principle of dividing the displacement of tracer particles by the corresponding time delay, is applied more and more widely in various fields, and its accuracy is influenced to some extent by the choice of the time delay. Existing PIV systems usually use a fixed time delay, which cannot satisfy the need to measure the velocity field of a time-varying flow with relatively high accuracy. To address this weakness, we introduce a new adjustable frame-straddling image formation system for PIV to improve the accuracy. The system consists of two main parts: a dual-CCD camera system designed to capture frame-straddling image pairs of the flow field with an adjustable time delay controlled by external trigger signals, and an effective sub-pixel image registration algorithm that computes the flow-field vectors on a hardware platform, which in turn generates the two channels of trigger signals with the adjustable time delay according to the instantaneously computed flow vectors. Experiments were performed on several time-varying flows to verify the effectiveness of the image formation system, and the results show that the accuracy of the flow vectors computed with this system was improved to some extent.
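The core velocity estimate, the displacement of an image pair divided by the adjustable time delay, can be sketched as follows; the FFT correlation here finds only an integer-pixel shift, whereas the authors use a sub-pixel registration algorithm that is not reproduced.

```python
# Estimate the particle displacement between a frame-straddling image pair by
# FFT-based cross-correlation, then divide by the (adjustable) time delay.
import numpy as np

def displacement(frame_a, frame_b):
    """Integer-pixel motion from frame_a to frame_b via circular correlation."""
    corr = np.fft.ifft2(np.fft.fft2(frame_a) * np.conj(np.fft.fft2(frame_b)))
    corr = np.fft.fftshift(np.abs(corr))
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    center = np.array(corr.shape) // 2
    return center - peak                 # (dy, dx) in pixels

def velocity(frame_a, frame_b, dt, pixel_size_m):
    dy, dx = displacement(frame_a, frame_b)
    return np.array([dx, dy]) * pixel_size_m / dt     # metres per second
```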
In this paper, we propose a new classification method based on a support vector machine (SVM) combined with multi-scale segmentation. The proposed method obtains satisfactory segmentation results based on both the spectral characteristics and the shape parameters of the segments; the SVM then labels all of these regions after multi-scale segmentation, which effectively improves the classification results. First, the homogeneity of the object spectra, texture, and shape is computed from the input image. Second, the multi-scale segmentation method is applied to the remote sensing image: combining graph-theory-based optimization with multi-scale image segmentation, the resulting segments are merged according to the heterogeneity criteria. Finally, based on the segmentation result, an SVM model combining spectral and texture features is constructed and applied for classification. The results show that the proposed method can effectively improve remote sensing image classification accuracy and efficiency.
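The object-based classification step can be sketched as below, assuming a segment label map from the multi-scale segmentation is already available; the shape and texture descriptors and the graph-based merging are not reproduced, and the SVM hyperparameters are placeholders.

```python
# Build per-segment spectral features from a label map and classify the
# segments with an SVM, writing the predicted class back to every pixel.
import numpy as np
from sklearn.svm import SVC

def segment_features(image, labels):
    """Mean and std of each band within every segment -> feature matrix."""
    segs = np.unique(labels)
    feats = []
    for s in segs:
        pixels = image[labels == s]                 # (n_pixels, n_bands)
        feats.append(np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)]))
    return segs, np.array(feats)

def classify_segments(image, labels, train_segs, train_classes):
    segs, feats = segment_features(image, labels)
    idx = {s: i for i, s in enumerate(segs)}
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    clf.fit(feats[[idx[s] for s in train_segs]], train_classes)
    pred = clf.predict(feats)
    class_map = np.zeros(labels.shape, dtype=int)
    for s, c in zip(segs, pred):
        class_map[labels == s] = c
    return class_map
```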
Epilepsy is the most common and frequently occurring neurological disorder, and the main method used for its diagnosis is electroencephalogram (EEG) signal analysis. Owing to the length of EEG recordings, analysis is quite time-consuming when carried out manually by an expert. This paper proposes the application of Linear Graph Embedding (LGE) as a dimensionality reduction technique for processing epileptic EEG signals, which are then classified using a Sparse Representation Classifier (SRC). The SRC is used to classify epilepsy risk levels from the EEG signals, and parameters such as sensitivity, specificity, time delay, quality value, performance index, and accuracy are analyzed.
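A sketch of the sparse representation classification stage follows, assuming the LGE step has already produced the feature vectors; scikit-learn's Lasso is used here as a convenient l1 solver, which is not necessarily the solver used in the paper.

```python
# Sparse representation classifier (SRC): code a test vector as a sparse
# combination of training samples and assign it to the class whose atoms give
# the smallest reconstruction residual.
import numpy as np
from sklearn.linear_model import Lasso

def src_predict(train_X, train_y, test_x, alpha=0.01):
    """train_X: (n_samples, n_features); train_y: labels; test_x: (n_features,)."""
    D = train_X.T                                     # dictionary, columns = samples
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(D, test_x)                              # l1-regularized coding
    code = coder.coef_
    residuals = {}
    for cls in np.unique(train_y):
        mask = (train_y == cls)
        recon = D[:, mask] @ code[mask]               # keep only this class's atoms
        residuals[cls] = np.linalg.norm(test_x - recon)
    return min(residuals, key=residuals.get)
```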
The conventional message-passing schedule for LDPC decoding is the so-called flooding schedule, which has the disadvantage that updated messages cannot be used until the next iteration, slowing convergence. To address this, the layered belief propagation (LBP) algorithm, based on a serial message-passing schedule, has been proposed. In this paper the decoding principle of the LBP algorithm is briefly introduced, and two improved algorithms are then proposed: a grouped serial decoding algorithm (grouped LBP) and a semi-serial decoding algorithm. They improve the decoding speed of LBP while maintaining good decoding performance.
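A minimal row-layered min-sum decoder illustrates the serial schedule described above: posterior LLRs are updated as soon as each check-node layer is processed. The min-sum approximation and the small dense parity-check matrix interface are simplifications, and the grouped and semi-serial variants are not shown.

```python
# Row-layered min-sum LDPC decoding: each check row is a layer, and the
# posterior LLRs are refreshed immediately after the layer is processed.
import numpy as np

def layered_min_sum(H, llr, iterations=20):
    H = np.asarray(H)
    m, n = H.shape
    L = np.array(llr, dtype=float)          # posterior LLRs, start from channel LLRs
    R = np.zeros((m, n))                    # check-to-variable messages
    rows = [np.flatnonzero(H[c]) for c in range(m)]
    for _ in range(iterations):
        for c in range(m):                  # one layer = one check node
            idx = rows[c]
            Q = L[idx] - R[c, idx]          # variable-to-check messages
            sign = np.prod(np.sign(Q)) * np.sign(Q)   # product of the other signs
            mags = np.abs(Q)
            for k, v in enumerate(idx):     # minimum over the other positions
                R[c, v] = sign[k] * np.delete(mags, k).min()
            L[idx] = Q + R[c, idx]          # immediate posterior update
        hard = (L < 0).astype(int)
        if not np.any((H @ hard) % 2):      # stop once all parity checks hold
            break
    return hard
```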
In this paper, a novel phase retrieval algorithm is presented that combines the advantages of the transport-of-intensity equation (TIE) method and iterative methods. The TIE method is fast, but its precision is limited; iterative methods converge slowly but give more accurate results. The proposed algorithm uses the iterative angular spectrum (IAS) method to exploit the physical constraints between the object and spectral domains and the relationship between intensity and phase during wave propagation. First, the phase at the object plane is calculated from two intensity images by the TIE. This result is then used as the initial phase for the IAS. Finally, the phase at the object plane is obtained according to the reversibility of the optical path. A feedback mechanism is imposed on the iteration process to improve the convergence rate and the precision of the phase retrieval, and simulation results are given.
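The angular-spectrum building block and a bare amplitude-constraint iteration can be sketched as follows; the TIE initialization and the authors' feedback mechanism are not reproduced, and the optical parameters are arbitrary.

```python
# Angular-spectrum propagation plus a simple iteration that enforces the
# measured amplitudes at the object plane and at a plane a distance z away.
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field a distance z using the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))     # evanescent terms dropped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def iterative_phase(amp_obj, amp_meas, phase0, wavelength, dx, z, iters=50):
    """phase0 would come from the TIE step; here it is just the starting guess."""
    phase = phase0.copy()
    for _ in range(iters):
        field = angular_spectrum(amp_obj * np.exp(1j * phase), wavelength, dx, z)
        field = amp_meas * np.exp(1j * np.angle(field))  # replace amplitude at z
        back = angular_spectrum(field, wavelength, dx, -z)  # propagate back
        phase = np.angle(back)                           # keep the retrieved phase
    return phase
```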