This PDF file contains the front matter associated with SPIE Proceedings Volume 9631, including the Title Page, Copyright information, Table of Contents, Authors, Introduction (if any), and Conference Committee listing.
This paper addresses the problem of segmenting foreground objects that contain apertures or discontinuities under camouflage effects, and introduces an optical physics model into foreground detection. A moving-foreground extraction method based on color invariants is proposed, in which the invariants serve as descriptors both for modeling the background and for segmenting the foreground. The method makes full use of color spectral information and spatial configuration. Experimental results demonstrate that it performs well in various situations of color similarity and meets real-time performance demands.
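As a concrete illustration of the idea, the sketch below computes the c1c2c3 color invariants (one common invariant family from the literature; the paper's exact invariants are not specified here) and thresholds the per-pixel distance to a background model built in that space. Function names and the threshold value are illustrative assumptions, not the authors' code.

```python
import numpy as np

def color_invariant(img_bgr, eps=1e-6):
    # c1c2c3 invariants: largely insensitive to shading and illumination
    # intensity, so chromaticity can separate camouflaged foreground.
    b, g, r = [img_bgr[..., i].astype(float) for i in range(3)]
    c1 = np.arctan2(r, np.maximum(g, b) + eps)
    c2 = np.arctan2(g, np.maximum(r, b) + eps)
    c3 = np.arctan2(b, np.maximum(r, g) + eps)
    return np.dstack([c1, c2, c3])

def foreground_mask(frame_bgr, bg_invariants, thresh=0.15):
    # Per-pixel distance between the frame and the background model,
    # both expressed in invariant space (hypothetical threshold).
    diff = np.linalg.norm(color_invariant(frame_bgr) - bg_invariants, axis=2)
    return diff > thresh
```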
Because web born-digital images have low resolution and dense text atoms, text-region over-merging and missed detection remain two open issues. In this paper a novel iterative algorithm is proposed to locate and segment text regions. In each iteration, candidate text regions are generated by detecting Maximally Stable Extremal Regions (MSERs) with diminishing thresholds and categorized into groups based on a new similarity graph, and the text-region groups are identified by applying several features and rules. Using the proposed overlap-checking method, the final well-segmented text regions are selected from these groups across all iterations. Experiments were carried out on the web born-digital image datasets used for the robust reading competitions at ICDAR 2011 and 2013; the results demonstrate that the proposed scheme significantly reduces both the number of over-merged regions and the loss rate of target atoms, and that its overall performance exceeds the best methods reported in the two competitions in terms of recall rate and f-score, at the cost of slightly higher computational complexity.
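The candidate-generation step can be sketched with OpenCV's MSER detector, lowering the stability parameter delta on each pass so that progressively less stable, denser regions are admitted; the similarity-graph grouping and overlap checking described above are not reproduced. The delta schedule and area limits are illustrative assumptions.

```python
import cv2

def candidate_text_regions(gray, deltas=(5, 4, 3, 2)):
    # One detection pass per diminishing threshold; each pass yields
    # (pixel lists, bounding boxes) of candidate text atoms.
    per_iteration = []
    for delta in deltas:
        mser = cv2.MSER_create(delta, 10, 2000)  # delta, min_area, max_area
        regions, bboxes = mser.detectRegions(gray)
        per_iteration.append(list(zip(regions, bboxes)))
    return per_iteration
```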
Copy-move is one of the most common methods of image manipulation. Several methods have been proposed to detect and locate tampered regions, but many fail when the copied regions are rotated before being pasted. A rotation-invariant detection method using the Polar Complex Exponential Transform (PCET) is proposed in this paper. First, the original image is divided into overlapping circular blocks, and PCET is applied to each block to extract rotation-invariant robust features. Second, the Approximate Nearest Neighbors (ANN) of each feature vector are collected by Locality Sensitive Hashing (LSH). Experimental results show that the proposed technique is robust to rotation.
Because of limits on pixel count and sensor size, the accuracy of a single pixel cannot satisfy machining-precision demands when only one picture of a large component is captured. In this paper, we propose a new rapid image-stitching method to solve this problem, based on the positions of the images and an eigenvalue search method. The method samples the images on a grid of points at a fixed step length and stitches them into a composite image. Experimental results show that the precision of the stitching process is ±5 microns, which meets manufacturing requirements.
This paper presents an improved filter based on the recently proposed peer group-based IFPGF method [1]. IFPGF improves the trade-off between computational efficiency and filtering quality over previous peer group-based methods and achieves good filtering quality at relatively low densities of noisy pixels, but it does not work well when the noise density is high (≥20%). We therefore propose an improved method that fixes these drawbacks for filtering salt-and-pepper impulsive noise. Experimental results suggest that the proposed method outperforms the classical vector filters and the recently proposed peer group-based filters, including IFPGF.
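The peer-group idea itself can be shown in a few lines: a pixel with too few "peers" (neighbors within a distance threshold) is declared impulsive and replaced by the window median. This is a grayscale toy version, not the published IFPGF or the proposed improvement; all parameter values are assumptions.

```python
import numpy as np

def peer_group_filter(img, win=3, dist_thresh=40.0, min_peers=3):
    # Grayscale peer-group impulse filter: count neighbors within
    # dist_thresh of the center pixel; too few peers -> replace the
    # pixel by the window median.
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + win, x:x + win].ravel()
            center = padded[y + pad, x + pad]
            peers = np.sum(np.abs(window - center) <= dist_thresh) - 1
            if peers < min_peers:
                out[y, x] = np.median(window)
    return out.astype(img.dtype)
```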
Support Vector Regression performs well at estimating the illumination chromaticity of a scene, and Least Squares Support Vector Regression (LS-SVR) has been put forward as an effective statistical learning prediction model. Although LS-SVR solves some estimation problems successfully, it also has an obvious defect: because a large number of support vectors are selected during training, the calculation becomes complex and the sparsity of SVR is lost. In this paper, drawing inspiration from WLS-SVM (Weighted Least Squares Support Vector Machines), we propose a new sparse model, SLS-SVR (Sparse Least Squares Support Vector Regression), which uses a density-weighted pruning algorithm to improve the sparsity of LS-SVR. Simulations indicate that by selecting only 30 percent of the support vectors, the prediction reaches 75 percent of that of the original model.
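A minimal sketch of the two ingredients follows: the LS-SVR dual solve, in which every training sample receives a nonzero coefficient (the sparsity loss described above), and a pruning step that keeps a fraction of the support vectors. For illustration the pruning criterion is simply coefficient magnitude; the paper's density-weighted criterion would replace it.

```python
import numpy as np

def lssvr_fit(X, y, gamma=10.0, sigma=1.0):
    # LS-SVR dual: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
    # Every sample gets an alpha, so the model is fully dense.
    sq = np.sum((X[:, None] - X[None, :]) ** 2, axis=2)
    K = np.exp(-sq / (2 * sigma ** 2))
    n = len(X)
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K + np.eye(n) / gamma]])
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]  # bias b, coefficients alpha

def prune_support_vectors(alpha, keep_frac=0.3):
    # Keep the 30% largest |alpha| -- a crude stand-in for the paper's
    # density-weighted pruning criterion.
    k = max(1, int(keep_frac * len(alpha)))
    return np.argsort(np.abs(alpha))[-k:]
```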
Creation and selection of relevant features for image classification is a process requiring significant involvement of domain knowledge. It is thus desirable to cover at least part of that process with semi-automated techniques capable of discovering and visualizing those geometric characteristics of images that are potentially relevant to the classification objective. In this work, we propose utilizing the multi-scale singular value decomposition (MSVD), which can be efficiently run on large high-dimensional datasets. We apply this technique to create a multi-scale representation of overhead satellite images of various types of vessels, with the objective of identifying those types. We augment the original set of pixel data with features obtained by applying the MSVD to multi-scale patches of the images. The result is then processed using a linear Support Vector Machine (SVM) algorithm. The classification rule obtained is significantly better than the one based on the original pixel space. The generic nature of the MSVD mechanism and standard mechanisms used for classification (SVM) suggest a wider utility of the proposed approach.
K-means is a classic unsupervised clustering algorithm. In theory it works well for image segmentation, but compared with other segmentation algorithms it needs much more computation and is slow, which limits its application. With the emergence of general-purpose computing on the GPU and the release of CUDA, some researchers have implemented K-means in parallel on the GPU and applied it to image segmentation. They achieved some results, but their approach is not fully parallel and does not take full advantage of the GPU's computing power. K-means has two core steps, label and update; in current parallel realizations only the labeling is parallel, while the update remains serial. In this paper both steps are parallelized to increase the degree of parallelism and accelerate the algorithm. Experimental results show that this improvement is considerably faster than previous work.
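Both steps can be written as data-parallel array operations, which is the structure that maps onto GPU kernels; below is a NumPy sketch (CPU-vectorized, standing in for CUDA) in which the update is a segmented reduction rather than a serial accumulation.

```python
import numpy as np

def kmeans_parallel(pixels, k=4, iters=20, seed=0):
    # pixels: (n, 3) array of color values.
    pixels = np.asarray(pixels, dtype=float)
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Label step: distance argmin, parallel over all pixels.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: one-hot matmul is a parallel segmented reduction,
        # replacing the serial accumulation of earlier implementations.
        onehot = np.eye(k)[labels]                 # (n, k)
        counts = onehot.sum(axis=0).clip(min=1)
        centers = (onehot.T @ pixels) / counts[:, None]
    return labels, centers
```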
Ship detection against a sea-sky background is useful not only in maritime visual surveillance but also in maritime search and rescue. Since ships are salient objects in infrared images with sea-sky backgrounds, we present a novel and effective saliency-based algorithm for ship detection in this setting. Our algorithm combines global saliency, local saliency, and a background prior to generate saliency maps, from which ships are finally segmented. Compared with four classic salient-object detection algorithms, our method performs better in both qualitative and quantitative terms.
Copy-move forgery is one of the simplest and most commonly used forging methods, where a part of an image is copied and pasted onto another part of the same image. This paper presents a new approach for copy-move forgery detection based on the fractional Fourier transform (FRFT). First, the 1-level discrete wavelet transform (DWT) of the forged image is computed to reduce its dimension. Next, the low-frequency sub-band is divided into overlapping blocks of equal size. The fractional Fourier transform of each block is calculated and a vector of its coefficients is constructed. All feature vectors are sorted in lexicographic order. Finally, the differences between adjacent feature vectors are evaluated and used to locate duplicated regions with identical features. Experimental results show that the proposed method is effective in detecting copy-move forgery regions.
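The pipeline skeleton is easy to sketch. Since no fractional Fourier transform ships with the common Python libraries, a truncated FFT magnitude stands in for the FRFT coefficients below; PyWavelets supplies the 1-level DWT. Block size and the matching threshold are assumptions.

```python
import numpy as np
import pywt

def copy_move_candidates(gray, block=8, feat_len=8, dist_thresh=1.0):
    # 1-level DWT -> overlapping blocks on the low-frequency sub-band ->
    # per-block feature vector -> lexicographic sort -> compare neighbors.
    ll, _ = pywt.dwt2(gray.astype(float), "haar")
    h, w = ll.shape
    feats, coords = [], []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            blk = ll[y:y + block, x:x + block]
            # Stand-in feature for the paper's FRFT coefficients.
            feats.append(np.abs(np.fft.fft2(blk)).ravel()[:feat_len])
            coords.append((y, x))
    feats = np.array(feats)
    order = np.lexsort(feats.T[::-1])          # first feature = primary key
    pairs = []
    for i, j in zip(order[:-1], order[1:]):
        if np.linalg.norm(feats[i] - feats[j]) < dist_thresh:
            pairs.append((coords[i], coords[j]))   # duplicate candidates
    return pairs
```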
Image segmentation plays a crucial role in the effective understanding of digital images, yet research into a general-purpose segmentation algorithm suitable for a variety of applications remains very active. Among the many approaches, graph-based methods are gaining popularity primarily for their ability to reflect global image properties. Volumetric image segmentation can simply produce a partition into relevant regions, but the most fundamental challenge is to precisely define the volumetric extent of an object, which may be represented by the union of multiple regions. The aim of this paper is to present a new method, with an efficient threshold, for detecting visual objects in color volumetric images. We present a unified framework for volumetric image segmentation and contour extraction that uses a virtual tree-hexagonal structure defined on the set of image voxels. The advantage of superposing a virtual tree-hexagonal network over the initial image voxels is that it reduces execution time and memory use without losing the initial resolution of the image.
With the rapid development of digital technology, data volumes have increased greatly for both static images and dynamic video. Reducing this redundant data so that information can be stored or transmitted more efficiently has become an important concern, so research on image compression is increasingly important. Using the GPU to achieve higher compression ratios is advantageous for interactive remote visualization; compared with the CPU, the GPU can be a good way to accelerate image compression. NVIDIA's GPUs have now evolved into their eighth generation and increasingly dominate the high-powered general-purpose computing field. This paper explains how images can be encoded on the GPU, and some experimental results are presented.
Facing the overabundance of semantic web information, this paper proposes a hierarchical-classification, visualizing RIA (Rich Internet Application) navigation system: Concept Map (CM) + Semantic Structure (SS) + Knowledge on Demand (KOD) service. The aim of the multimedia processing and empirical application testing was to investigate the utility and usability of this visualizing navigation strategy in web communication design, and whether it enables users to retrieve and construct personal knowledge. Furthermore, based on market-segmentation theory from marketing, a User Interface (UI) classification strategy is proposed and a set of hypermedia design principles is formulated for further UI strategy and e-learning resources in semantic web communication. The findings are: (1) whether the simple or the complex declarative knowledge model is used, the "CM + SS + KOD navigation system" has a better cognitive effect than the "non CM + SS + KOD navigation system"; however, for users with no web-design experience, the navigation system has no obvious cognitive effect. (2) Classification is essential in semantic web communication design: different user groups have diverse preference needs and different cognitive styles in the CM + SS + KOD navigation system.
This paper proposes a new image-thresholding method that integrates the Multi-scale Gradient Multiplication (MGM) transformation and the Adjusted Rand Index (ARI). The proposed method finds the optimal threshold by computing the accumulated similarity between two image collections, from the perspective of the global spatial attributes of images. One collection is obtained by binarizing the original gray-level image at each possible gray level; the other consists of reference images produced by binarizing the MGM image, which results from applying the MGM transformation to the original image. ARI is a similarity measure from statistics, particularly data clustering, that can be readily computed from two image matrices. The optimal threshold is determined by maximizing the accumulated ARI similarity. Comparisons with three well-established thresholding methods are presented for a number of real-world images. Experimental results demonstrate the effectiveness and robustness of the proposed method.
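A compact sketch of the selection loop, under the simplifying assumption of a single MGM reference image (the paper accumulates similarity over a collection): binarize the gray image at every candidate level and keep the level whose partition agrees best with the binarized MGM image under the ARI.

```python
import numpy as np
from scipy import ndimage
from sklearn.metrics import adjusted_rand_score

def mgm_image(gray, sigmas=(1.0, 2.0, 4.0)):
    # Multi-scale gradient multiplication: product of Gaussian gradient
    # magnitudes computed at several scales (scales are assumptions).
    g = np.ones_like(gray, dtype=float)
    for s in sigmas:
        g *= ndimage.gaussian_gradient_magnitude(gray.astype(float), sigma=s)
    return g

def ari_threshold(gray):
    mgm = mgm_image(gray)
    ref = (mgm > mgm.mean()).ravel()        # single reference partition
    best_t, best_ari = 0, -1.0
    for t in range(1, 255):
        ari = adjusted_rand_score(ref, gray.ravel() > t)
        if ari > best_ari:
            best_t, best_ari = t, ari
    return best_t
```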
De-noising is a classic problem in image processing; however, preserving image edges and detail while removing noise is an inherent trade-off. To address this, the curvature and gradient are introduced in this paper to improve the direction-diffusion operator, so that it removes noise while preserving edge features and details and suppressing isolated noise. Experimental results show that the method de-noises images effectively.
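The abstract leaves the exact operator unspecified; as a reference point, one explicit step of classical gradient-modulated (Perona-Malik-style) diffusion is sketched below, since the paper's curvature term would graft onto the same structure. Step size and conductance scale are assumptions.

```python
import numpy as np

def diffusion_step(u, dt=0.15, kappa=15.0):
    # Forward differences in the four principal directions (np.roll
    # gives periodic boundaries, acceptable for a sketch).
    dn = np.roll(u, -1, axis=0) - u
    ds = np.roll(u, 1, axis=0) - u
    de = np.roll(u, -1, axis=1) - u
    dw = np.roll(u, 1, axis=1) - u
    # Conductance decays with gradient magnitude, so edges diffuse less.
    g = lambda d: np.exp(-(d / kappa) ** 2)
    return u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
```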
Edge detection is necessary for image segmentation and pattern recognition. In this paper, an improved Canny edge-detection approach is proposed to address defects of the traditional algorithm. A modified bilateral filter with a compensation function based on pixel-intensity similarity judgment is used to smooth the image instead of a Gaussian filter, which preserves edge features while removing noise effectively. To reduce sensitivity to noise in the gradient calculation, the algorithm uses gradient templates in four directions. Finally, the Otsu algorithm adaptively obtains the dual thresholds. The algorithm was implemented with the OpenCV 2.4.0 library in Visual Studio 2010, and experimental analysis shows that the improved algorithm detects edge details more effectively and with greater adaptability.
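The smoothing and thresholding stages map directly onto OpenCV calls; this sketch substitutes a plain bilateral filter for the paper's compensated variant and omits the 4-direction gradient templates. Filter and threshold parameters are illustrative.

```python
import cv2

def improved_canny(gray):
    # Bilateral smoothing instead of Gaussian (edge-preserving).
    smoothed = cv2.bilateralFilter(gray, 7, 50, 7)  # d, sigmaColor, sigmaSpace
    # Otsu-derived dual thresholds instead of fixed values.
    otsu_t, _ = cv2.threshold(smoothed, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    high, low = otsu_t, 0.5 * otsu_t
    return cv2.Canny(smoothed, low, high)
```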
To fuse visible and infrared images captured in low-visibility conditions, a method based on the shearlet transform (ST) and image enhancement is proposed in this paper. Shearlets have a simple mathematical structure similar to wavelets and are associated with a multi-scale analysis; an image can be decomposed by the ST at any scale and in any direction, so shearlets can fully capture the intrinsic geometric features of multidimensional phenomena. First, the infrared and visible images are each decomposed by the ST, and all sub-images of the R, G and B channels of the visible image are enhanced. Second, since the directional coefficients produced by the ST statistically follow a generalized Gaussian distribution (GGD), their scale parameter is estimated by absolute-moment estimation over local neighborhoods and used to measure saliency and compute fusion weights. The fused coefficients are obtained by weighted averaging, and the final fused image is reconstructed from them. Experimental results show that the fused image attains the maximum entropy value and accords better with the human visual system.
Due to their physical structures and motion attitudes, the IR radiative properties of ballistic targets differ during flight. However, such differences cannot be easily detected by a high-speed observing platform under the influence of detector noise, which causes difficulties in the classification and recognition of targets. This paper presents modeling and simulation of the IR radiative properties of ballistic targets, discusses the variations in these properties among different targets, and proposes a method for a parametric expression of the targets' grayscale time series under noise. The experimental results indicate that by constructing a hybrid model of trend, period, and noise, an effective feature of the time series can be extracted using de-noising, curve fitting, and frequency transformation, which ultimately aids the classification of targets.
In this paper, we propose a novel reflection-based method to estimate the local orientation of a specular surface. For a calibrated scene with a fixed light band, the band is reflected by the surface onto the image plane of a camera, and the local geometry between the surface and the reflected band is estimated. First, to find the relationship linking the object position, the surface orientation, and the band reflection, we study the fundamental geometry between a specular mirror surface and a band source. We then extend the approach to spherical surfaces of arbitrary curvature. Experiments are conducted with a mirror surface and a spherical surface; the results show that our method can obtain the local surface orientation merely by measuring the displacement and the shape of the reflection.
The purpose of detecting sharp changes in image brightness is to capture important events and changes in the properties of the world. The accuracy of edge-detection methods determines the eventual success or failure of the computerized analysis procedures that follow, such as object recognition. Edge detectors have generally been designed to capture simple ideal step functions in image data, but real discontinuities in image signals deviate from this ideal form. Three further types of deviation from the step function, relating to real distortions occurring in natural images, are examined according to their characteristics: impulse, ramp, and sigmoid functions, which respectively represent narrow line signals, simplified blur effects, and more accurate blur modeling. From this analysis, general rules for edge-pattern characterization are developed, based on the classification of edge types into four categories: ramp, impulse, step, and sigmoid (RISS). Additionally, the proposed algorithm performs connectivity analysis on the edge map to ensure that small, disconnected edges are removed. Experimental performance analysis confirms that the proposed edge-detection algorithm, with its edge-pattern analysis and characterization, leads to more effective edge detection and localization with improved accuracy. To extend the algorithm to real-time applications, a parallel implementation on a graphics processing unit (GPU) is presented; across the configurations in our tests, the GPU implementation shows scalable speedup as image resolution increases, and achieves 14 frames per second in real-time processing (1280×720).
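The connectivity-analysis step mentioned above can be sketched with a connected-components pass over the binary edge map, discarding fragments below a minimum pixel count (the cutoff here is an assumption):

```python
import cv2
import numpy as np

def prune_small_edges(edge_map, min_pixels=20):
    # edge_map: binary uint8 image (e.g., Canny output).
    n, labels, stats, _ = cv2.connectedComponentsWithStats(
        edge_map, connectivity=8)
    keep = np.zeros(n, dtype=bool)
    keep[1:] = stats[1:, cv2.CC_STAT_AREA] >= min_pixels  # skip background
    return (keep[labels] * 255).astype(np.uint8)
```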
To address the low efficiency of the SIFT algorithm when an exhaustive search is used to find the nearest and second-nearest neighbors of feature points, this paper introduces a K-D tree to index the feature points extracted from database images. The algorithm is further improved with a weighted-priority search, which further enhances the efficiency of feature matching.
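With OpenCV this indexing strategy is available through the FLANN matcher; below is a minimal sketch, with Lowe's nearest/second-nearest ratio test standing in for the matching criterion (the paper's weighted-priority refinement is not reproduced).

```python
import cv2

def kdtree_sift_match(img1, img2, ratio=0.7):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    # FLANN with a KD-tree index (algorithm=1) replaces exhaustive search.
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5),
                                  dict(checks=50))
    matches = flann.knnMatch(des1, des2, k=2)
    # Keep matches whose nearest neighbor clearly beats the second nearest.
    return [m for m, n in matches if m.distance < ratio * n.distance]
```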
Image matching plays a very important role in medical imaging, and two registration methods, one based on mutual information and one based on optical flow, are particularly effective. Experimental results show that the two methods have complementary advantages: the mutual-information method handles overall displacement well, while the optical-flow method is very sensitive to small deformations. In the breast DCE-MRI images studied in this paper there is not only overall deformation caused by the patient but also non-rigid small deformation caused by respiration, so single-method registration algorithms cannot meet the needs of such complex situations. After a comprehensive analysis of the advantages and disadvantages of the two methods, this paper proposes a registration algorithm combining mutual information with the optical-flow field. Subtraction images of the reference and floating images serve as the main criterion for evaluating registration quality, with the mutual information between image-sequence values as an auxiliary criterion. In tests, the algorithm achieves better accuracy and reliability on breast DCE-MRI image sequences.
Existing visual-saliency detection methods are usually based on a single image; however, without prior knowledge the contents of a single image are ambiguous, so single-image saliency detection cannot reliably extract the region of interest. To solve this, we propose a novel saliency detection method based on multi-instance images. Our method considers human visual-psychological factors and measures visual saliency by global contrast, local contrast, and sparsity. It first uses multi-instance learning to obtain cluster centers and then computes the relative dispersion of features. By fusing the weighted feature saliency maps, the final synthesized saliency map is generated. Compared with other saliency detection methods, ours achieves a higher hit rate.
The trigonometric polynomial spline surface generated over the space {1, sin t, cos t, sin 2t, cos 2t} is presented in this work. The proposed surface automatically interpolates all given data points and achieves C2 continuity without solving equation systems. Image zooming using the proposed surface is then investigated. Experimental results show that the surface is effective for image-zooming problems.
Advances in X-ray microtomography (XMT) are opening new opportunities for examining soil structural properties and fluid distribution around living roots in situ. The low contrast between moist soil, roots, and air-filled pores in XMT images makes segmentation difficult. In this paper, we develop an unsupervised method for segmenting XMT images into pore (air and water), soil, and root regions. A feature-based segmentation method isolates regions of similar texture patterns using the normalized inverse difference moment of the gray-level co-occurrence matrix. The results show that this combination of features, clustering, and post-processing has advantages over other advanced segmentation methods.
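The texture feature is standard: scikit-image's GLCM "homogeneity" property is the normalized inverse difference moment. Below is a per-patch sketch; patch size, angles, and the three-cluster assumption are illustrative choices, not the paper's settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

def idm_patch_clusters(gray, patch=16):
    # gray: uint8 image. Compute the GLCM homogeneity (normalized inverse
    # difference moment) per patch, then cluster patches into 3 classes
    # standing for pore / soil / root textures.
    h, w = gray.shape
    feats, coords = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            glcm = graycomatrix(gray[y:y + patch, x:x + patch],
                                distances=[1], angles=[0, np.pi / 2],
                                levels=256, normed=True)
            feats.append(graycoprops(glcm, "homogeneity").mean())
            coords.append((y, x))
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(
        np.array(feats).reshape(-1, 1))
    return coords, labels
```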
The similarity measure of the bilateral filter cannot indicate the difference between pixels accurately in densely textured regions. This causes serious over-smoothing and reduces the edge-preserving property of the bilateral filter. This paper presents a new weighting function for the bilateral filter that adds a range kernel based on the relative difference between pixels. The range kernel operates differently depending on whether it acts on pixel gray intensities or colors, and it uses a reciprocal kernel to approximate the standard Gaussian weight. Experimental results suggest that it preserves significantly more detail than the classical bilateral filter in edge and dense-texture regions.
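A sketch of the modified weighting, under assumptions about the exact kernel forms (the relative difference is taken against the larger of the two intensities, and 1/(1+x^2) serves as the reciprocal approximation of the Gaussian):

```python
import numpy as np

def relative_bilateral(img, win=5, sigma_s=2.0, sigma_r=0.1):
    # Grayscale bilateral variant: range kernel acts on the *relative*
    # difference |I(p)-I(q)| / max(I(p), I(q)); spatial kernel is Gaussian.
    pad = win // 2
    f = np.pad(img.astype(float), pad, mode="reflect")
    h, w = img.shape
    out = np.zeros((h, w))
    ys, xs = np.mgrid[-pad:pad + 1, -pad:pad + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    for y in range(h):
        for x in range(w):
            window = f[y:y + win, x:x + win]
            center = f[y + pad, x + pad]
            rel = np.abs(window - center) / np.maximum(
                np.maximum(window, center), 1e-6)
            # Reciprocal kernel approximating the Gaussian range weight.
            weights = spatial / (1.0 + (rel / sigma_r) ** 2)
            out[y, x] = (weights * window).sum() / weights.sum()
    return out
```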
We present a novel mixture-of-links model to segment an object observed from multiple viewpoints. Each component in this mixture represents a temporal linkage between superpixels across the viewpoints, expressing inter-view consistency. The principal goal is to find the maximum a posteriori estimate of the appearance models and the exact bounding box of the object in each view. To this end, segmentation is cast as finding more comprehensive and accurate samples using the mixture-of-links model. In contrast to most existing multi-view co-segmentation methods, which rely on time-consuming 3D information, our method uses only 2D cues, achieving faster speed without decreasing accuracy. The experimental results confirm the effectiveness of our approach.
A technique for building consistent 3D reconstructions from unordered large image sets, based on a global linear method, is presented. When views are treated incrementally, the external calibration is subject to drift, in contrast to global methods, which distribute residual errors evenly. We propose a combined global linear method based on computing consistent measurements in three views. First, all global camera rotations are computed from relative rotation estimates of pairwise image matches. Second, we minimize an approximate geometric error and the projection error of feature points to find a linear relationship within camera triplets; this step efficiently removes incorrect triplets, which is very important for global reconstruction. Third, the triplets are directly scaled up to register multiple cameras, serving as a good initialization for the final bundle adjustment. The proposed method is tested on several well-known image sets, and the results are accurate and robust.
Color constancy is an important problem in machine vision and image processing. We propose a new method based on describing detail information to estimate the chromaticity of the light source and restore the true colors of captured images. The main idea, following characteristics of human vision, is to use the salient detail information in an image to estimate the lighting conditions of the real scene. To evaluate the proposed method, two well-known algorithms are selected and their comparative results are presented. The proposed approach is shown to perform better than these traditional color-constancy methods most of the time.
One purpose of employing modern technologies in the agricultural and food industry is to increase the efficiency and automation of production processes, which improves the productivity of business enterprises and thus makes them more competitive. This branch of the economy now faces the challenge of producing agricultural and food products with the best possible quality parameters while maintaining optimal production and distribution costs of the processed biological material. Many scientific centers therefore seek new and improved methods and technologies in this field to meet these expectations. A new solution under constant development is so-called machine vision, which can replace human work in both qualitative and quantitative evaluation. An indisputable advantage of this method is that it keeps the evaluation unbiased while improving its rate and, importantly, eliminating expert fatigue. This paper elaborates on quality evaluation by marking contamination in malting barley grains using computer image analysis and selected methods of artificial intelligence [4-5].
Through a study of existing image-retrieval technology, this paper presents a new design for a semantics-based image-retrieval system. By establishing a mapping between low-level image features and low-level image semantics, the scheme associates low-level semantics with high-level semantics, realizing a hierarchical semantic description structure and improving the accuracy of high-level semantic image recognition.
To satisfy the requirements of astronomical observation, a novel timing sequence for a frame-transfer CCD is proposed. Multiple functions are achieved, including adjustment of the work pattern, the exposure time, and the frame frequency. There are four work patterns: normal, standby, zero exposure, and test. The exposure-time adjustment can set multiple exposure times according to the observation, and the frame frequency can be adjusted when a dark target is imaged and the maximum exposure time is insufficient. For the video-processing design, offset correction and multiple-gain adjustment are proposed: offset correction eliminates the fixed-pattern noise of the CCD, and a three-gain pattern improves the signal-to-noise ratio of the observation. Finally, images in different situations were collected and the system readout noise was calculated; the results show that the designs in this paper are practicable.
Assessing the level of marbling in meat from digital images is increasingly popular as computer vision tools become more advanced. However, when muscle cross-sections serve as the data source for marbling evaluation, several problems remain, and an accurate method is needed to facilitate the evaluation procedure and increase its accuracy. The presented research compares the effects of different image-segmentation tools with regard to their usefulness for evaluating marbling on anatomical muscle cross-sections. This study is an initial trial in this field of research and an introduction to ultrasonic image processing and analysis.
This paper presents an image classification model developed to classify images embedded in commercial real estate flyers. It is a component in a larger multimodal system that uses both the texts and the images in the flyers to automatically classify them by property type. The role of the image classifier is to provide the genres of the embedded images (map, schematic drawing, aerial photo, etc.), which are then combined with the flyer texts for the overall classification. We used an ensemble learning approach and developed a model in which the outputs of an ensemble of support vector machines (SVMs) are combined by a k-nearest neighbor (KNN) classifier. The classifiers in the ensemble are strong classifiers, each trained to predict a given genre. The model is not only intuitive, taking advantage of the mutual distinctness of the image genres, but also scalable. We tested it on over 3000 images extracted from online real estate flyers; the results show that our model outperformed the baseline classifiers by a large margin.
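The combiner design reduces to a few lines with scikit-learn: one binary SVM per genre, with the stacked decision scores re-classified by a KNN. Kernel choice and k are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

class SvmEnsembleKnn:
    """One SVM per image genre; the vector of SVM decision scores is
    re-classified by a KNN, mirroring the combiner design described."""

    def __init__(self, n_genres, k=5):
        self.svms = [SVC(kernel="rbf", gamma="scale") for _ in range(n_genres)]
        self.knn = KNeighborsClassifier(n_neighbors=k)

    def fit(self, X, y):
        # Train each SVM one-vs-rest for its assigned genre, then fit the
        # KNN on the stacked decision scores.
        scores = np.column_stack(
            [svm.fit(X, y == g).decision_function(X)
             for g, svm in enumerate(self.svms)])
        self.knn.fit(scores, y)
        return self

    def predict(self, X):
        scores = np.column_stack(
            [svm.decision_function(X) for svm in self.svms])
        return self.knn.predict(scores)
```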
Gel electrophoresis (GE) is one of the most widely used methods for separating DNA, RNA, and protein molecules according to size, weight, and quantity in areas such as genetics, molecular biology, biochemistry, and microbiology. Separating the molecules requires finding the borders of each molecule fragment. This paper presents a software application that identifies the column edges of DNA fragments in three steps. First, the application obtains lane histograms of agarose gel electrophoresis images by projection along the x-axis. Second, it uses the k-means clustering algorithm to classify the histogram values into left-side values, right-side values, and undesired values. Third, the column edges of the DNA fragments are determined using mean calculations and related mathematical operations to separate the fragments from the background in a fully automated way. The application also reports the locations and the number of DNA fragments in images captured by a scientific camera.
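Steps 1 and 2 of the pipeline can be sketched as a column projection followed by k-means over the histogram values; the sketch assumes fragments appear darker than the background (for bright-on-dark gels, pick the brightest cluster instead).

```python
import numpy as np
from sklearn.cluster import KMeans

def lane_edges(gel_gray):
    # Step 1: x-axis projection histogram of the gel image.
    profile = gel_gray.astype(float).sum(axis=0)
    # Step 2: k-means over histogram values (lane / background / other).
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(
        profile.reshape(-1, 1))
    lane_cluster = labels[np.argmin(profile)]   # cluster of darkest column
    cols = np.flatnonzero(labels == lane_cluster)
    # Contiguous runs of lane columns -> (left, right) edges per fragment.
    splits = np.flatnonzero(np.diff(cols) > 1)
    return [(r[0], r[-1]) for r in np.split(cols, splits + 1)]
```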
The aim of this study was to investigate the possibility of using computer image analysis for assessing and classifying the morphological variability and health state of the horse navicular bone. The assumption was that classification could be based on the information contained in two-dimensional digital images of the navicular bone together with information on the horse's health. The first step was to define the classes of the analyzed bones, and then to extract characteristics from the images using computer image analysis. These characteristics were correlated with data concerning the animal, such as: side of the hoof, navicular-syndrome grade (scale 0-3), type, sex, age, weight, information about lace, and information about heel. This paper is an introduction to the study of neural image analysis in the diagnosis of navicular syndrome. The prepared method can serve as an introduction to research on a non-invasive way to assess the condition of the horse navicular bone.
In this paper, a novel transmission-type visibility meter, also called a double-reflection transmission-type visibility meter, is introduced and developed. The meter uses a charge-coupled device (CCD) as the image-acquisition unit: the CCD acquires images of light spots generated by a light source, the air extinction coefficient is calculated, and the meteorological visibility is then obtained. The light source is an important unit of this visibility meter and influences the calculated visibility. Several light-source design schemes are proposed and investigated; each scheme is tested and the experimental results are analyzed. Finally, the visibility meter with the chosen light-source design underwent a measurement comparison experiment, and its reliability and accuracy were demonstrated.
This paper describes part of a research effort whose goal was to develop an effective method of determining marbling classes of lamb carcasses with neural image-analysis techniques. Current methods for identifying the level of intramuscular fat are time consuming, require specialized expertise, and often rely on subjective assessment against predefined patterns. The authors propose a neural model developed as a tool to assist the evaluation of marbling.
This paper proposes a technique and an algorithm for a people-identification device based on processing images from a low-resolution camera. The infrared channel is the only information needed: sensing the blood's response at the proper wavelength yields a preliminary snapshot of the vascular map of the back of the hand. The software uses this information to extract the user's characteristics within a limited area (region of interest, ROI), unique for each user, which is applicable to biometric access-control devices. Recognition prototypes of this kind are usually expensive, but in this minimalist design the biometric equipment uses only a low-cost camera and an adapted matrix of IR emitters, yielding an economical and versatile prototype without sacrificing the high level of effectiveness that characterizes this identification method.
This paper presents methods of neural image analysis aimed at estimating the maturity state of selected apple varieties popular in Poland. The degree of maturity of the selected varieties was identified on the basis of information encoded in graphical form in digital photos. The process applies the BBCH scale used to determine apple maturity; this scale is widely used in the EU, has been developed for many monocotyledonous and dicotyledonous species, and enables detailed determination of a plant's development stage. The purpose of this work is to identify the maturity level of selected apple varieties using image-analysis methods and classification techniques based on artificial neural networks. Analysis of representative graphical features extracted by image analysis enabled assessment of apple maturity. For practical use, the "JabVis 1.1" neural IT system was created in accordance with software-engineering requirements, to support decision-making processes in the broadly understood production and processing of apples.
The aim of this paper was to extract representative features and generate an appropriate neural model for classifying varieties of edible potato. Potatoes of the Vineta and Denar varieties were the empirical objects of this study. The main concept of the project was to develop and prepare an image database using computer image-analysis software, and then to choose the neural model with the greatest ability to identify the selected variety. The ultimate aim is to assist and accelerate the work of the expert who classifies and stores different potato varieties in heaps.
The aim of the study was to determine the possibility of analyzing the C:N ratio of a chicken manure and wheat straw mixture. This paper presents the preliminary assumptions and parameters of the feature-extraction process, and introduces digital image analysis of the mixture. The work is an introduction to research on developing a computer system that could replace chemical analysis. A good understanding of how the C:N value can be inferred from image analysis will help in selecting optimal conditions for biological waste treatment.
The aim of this research was to investigate the possibility of using computer image analysis and artificial neural networks to assess the amount of dry matter in the tested compost samples. The research leads to the conclusion that neural image analysis may be a useful tool for determining the quantity of dry matter in compost. The generated neural model may be the starting point for research into using neural image analysis to assess the content of dry matter and other compost constituents. The presented RBF 19:19-2-1:1 model, characterized by a test error of 0.092189, may be the most efficient.
This study established identification schemes with a self-organizing feature map (SOM) to monitor Engineering Changes (ECs), based on historical data from a company that specializes in computers and peripherals. The company's product life cycle is 3-6 months. The historical data were divided into three parts, each covering four months. The first part, comprising 2,343 records from January to April (the training period), constitutes the control group; the second and third parts constitute Experimental Groups (EG) 1 and 2, respectively. For EG 1 and 2, the success rates in recognizing abnormal-EC information were approximately 96% and 95%, respectively. This paper shows the importance of, and screening procedures for, abnormal engineering changes for a company specializing in computers and peripherals.
Online Publication Date: July 6, 2015
Withdrawn from Publication: April 11, 2016
This paper was retracted from the SPIE Digital Library on April 11, 2016, by the publisher upon verification that substantial portions of the paper were copied from the following work without attribution or permission:
M.M.J. Gerlach and C.T. Rooijers, “3D Face Recognition: Data Processing: Registration and Deformation,” Bachelor of Science Thesis, Delft University of Technology, September 7, 2013.
Common face detection methods may fail on videos captured by patrol cars because of low resolution and uncooperative subjects. We propose a method to handle this problem with a parts-based deep model. In this method, different parts of the human body are detected to improve the accuracy of face detection, and a deep neural network combines the detections of the different parts. Experiments were conducted on two different datasets. The results demonstrate that the proposed method outperforms existing common face detection methods.
In this paper, a novel spatiotemporal feature-based method is proposed to recognize facial expressions from depth video. Independent Component Analysis (ICA) spatial features of the depth faces are first augmented with optical flow motion features. The augmented features are then enhanced by Fisher Linear Discriminant Analysis (FLDA) to make them robust. The features are then modeled with Hidden Markov Models (HMMs), one per facial expression, which are later used to recognize the appropriate expression in a test depth video. The experimental results show the superior performance of the proposed approach over conventional methods.
Image-based modeling and rendering is currently one of the most challenging topics in Computer Vision and Photogrammetry. The key issue is building a set of dense correspondences between two images, namely dense matching or stereo matching. Among dense matching algorithms, Semi-Global Matching (SGM) is arguably one of the most promising for real-time stereo vision. Compared with scanline-based Dynamic Programming (DP), SGM aggregates matching cost from several (eight or sixteen) directions rather than only along the epipolar line. Thus, SGM eliminates the classical “streaking problem” and greatly improves accuracy and efficiency. In this paper, we aim at further improving SGM accuracy without increasing the computational cost. We propose setting the penalty parameters adaptively according to image edges extracted by edge detectors. We have carried out experiments on the standard Middlebury stereo dataset and evaluated the performance of our modified method against the ground truth. The results show a noticeable accuracy improvement compared with the results using fixed penalty parameters, while the runtime computational cost was not increased.
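As an illustration of the idea (the abstract does not give the authors' actual parameter values), a minimal Python sketch of edge-adaptive penalty maps might look like this; the Canny thresholds and penalty constants are assumptions:

    import cv2
    import numpy as np

    def adaptive_penalties(gray, p1=8, p2_flat=120, p2_edge=32):
        # Relax the large-disparity-jump penalty P2 across intensity edges,
        # where depth discontinuities are likely; keep it high in flat regions.
        edges = cv2.Canny(gray, 50, 150)              # placeholder thresholds
        P2 = np.where(edges > 0, p2_edge, p2_flat).astype(np.int32)
        P1 = np.full_like(P2, p1)                     # small-jump penalty stays fixed
        return P1, P2

The per-pixel P1/P2 maps would then be consumed by the cost aggregation step along each of the eight or sixteen SGM directions.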
The standard Windows password login is neither particularly secure nor convenient to operate. We therefore introduce a biometric technology, face recognition, into the computer login system. Not only can it secure the computer system, it can also identify administrators at different privilege levels. With this enhancement of system security, users no longer face cumbersome password entry or the risk of password theft.
Computerized human face detection is an important task in deformable pattern recognition. Especially in cooperative authentication scenarios like ATM fraud detection, attendance recording, video tracking and video surveillance, the performance of the face detection engine in terms of accuracy, memory utilization and speed has been an active area of research for the last decade. Haar-based face detection and SIFT- or EBGM-based face recognition systems are fairly reliable in this regard, but their features are extracted from gray-level textures. When the input is a high-resolution online video with a fairly large viewing area, a Haar detector needs to search for faces everywhere (say 352×250 pixels) and all the time (e.g., 30 FPS capture). In the current paper we propose to address both of the aforementioned scenarios by a neuro-visually inspired method of figure-ground segregation (NFGS) [5] that produces a two-dimensional binary array from a gray face image. The NFGS identifies the reference video frame at a low sampling rate and updates it upon significant changes of environment such as illumination. The proposed algorithm triggers the face detector only when a new entity enters the viewing area. To improve detection accuracy, the classical face detector is enabled only in a narrowed-down region of interest (RoI) fed by the NFGS. The RoI is updated online in each frame with respect to the moving entity, which in turn improves both the FR (False Rejection) and FA (False Acceptance) rates of the face detection system.
In recent years, Human Action Recognition (HAR) has attracted much attention from the research community due to its challenges as well as its wide applications. In this paper, we investigate a GMM supervector based Universal Background Model (UBM) and Support Vector Machine (SVM) with dense trajectory and motion boundary features for a HAR system. A GMM supervector is obtained by adapting with the UBM and cascading all the mean vector components. The supervectors are then applied as input features to an SVM classifier. Moreover, we also adopt two modified GMM kernels, the KL and GUMI kernels, in this research. We then make a comparison and critical analysis of our method against previous systems. Experimental results demonstrate that the proposed approach performs more efficiently than current systems.
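A minimal sketch of the mean-only MAP adaptation and stacking that yields a GMM supervector, assuming scikit-learn's GaussianMixture as the UBM and an assumed relevance factor r (the paper's exact settings are not given in the abstract):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def gmm_supervector(ubm: GaussianMixture, feats: np.ndarray, r: float = 16.0):
        # feats: (T, D) descriptors from one video clip
        post = ubm.predict_proba(feats)               # (T, K) responsibilities
        n_k = post.sum(axis=0)                        # zeroth-order statistics
        f_k = post.T @ feats                          # first-order statistics (K, D)
        alpha = (n_k / (n_k + r))[:, None]            # data-dependent adaptation weight
        means = alpha * (f_k / np.maximum(n_k[:, None], 1e-8)) + (1 - alpha) * ubm.means_
        return means.ravel()                          # cascade adapted means -> supervector

The resulting fixed-length supervectors can be fed directly to an SVM.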
This paper proposes a novel human action recognition framework named Hidden Markov Model (HMM) based Hybrid Event Probability Sequence (HEPS), which can recognize unlabeled actions in videos. First, motion trajectories are effectively extracted using the centers of moving objects. Secondly, the HEPS is constructed from the trajectories and represents different human actions. Finally, an improved Particle Swarm Optimization (PSO) with inertia weight is introduced to recognize human actions using the HMM. The proposed methods are evaluated on the UCF Human Action Dataset and achieve a 76.67% accuracy rate. The comparative experimental results demonstrate that the HMM achieves superior results with HEPS and PSO.
This paper proposes an acceleration method for large-scale face recognition systems. When dealing with a large-scale database, face recognition is time-consuming. In order to tackle this problem, we employ the k-means clustering algorithm to partition the face data. Specifically, the data in each cluster are stored in the form of a kd-tree, and face feature matching is conducted with kd-tree based nearest neighbor search. Experiments on CAS-PEAL and a self-collected database show the effectiveness of our proposed method.
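A minimal sketch of the cluster-then-kd-tree search, assuming scikit-learn and SciPy; the feature dimension, cluster count and random gallery are placeholders:

    import numpy as np
    from sklearn.cluster import KMeans
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    gallery = rng.random((10000, 64), dtype=np.float32)   # placeholder face features

    km = KMeans(n_clusters=32, n_init=10).fit(gallery)
    ids = [np.flatnonzero(km.labels_ == c) for c in range(32)]
    trees = [cKDTree(gallery[i]) for i in ids]            # one kd-tree per cluster

    def match(query):
        c = int(km.predict(query[None])[0])               # route query to its cluster
        dist, local = trees[c].query(query)               # NN search inside the cluster
        return ids[c][local], dist                        # map back to a gallery index

Searching only one cluster's tree replaces a linear scan over the whole database with a much smaller kd-tree query.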
One of the most studied problems in robotics in recent years has been path planning for mobile robots in static and observable conditions. This is an open problem without pre-defined rules (non-heuristic), which requires measuring the state of the environment, finding useful information, and using an algorithm to select the best path. This paper proposes a simple and efficient geometric path planning strategy supported by digital image processing. The image of the environment is processed to identify obstacles, and thus the free space for navigation. Then, using visibility graphs, the possible navigation paths guided by the vertices of obstacles are produced. Finally, the A* algorithm is used to find the best possible path. The proposed alternative is evaluated by simulation on a large set of test environments, showing in all cases its ability to find a plausible collision-free path.
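For reference, a minimal A* over a visibility graph, with the graph given as an adjacency map and node coordinates for the straight-line heuristic (both would come from the obstacle-vertex step; all names here are illustrative):

    import heapq, math

    def astar(graph, pos, start, goal):
        # graph: {node: [(neighbour, edge_cost), ...]}; pos: {node: (x, y)}
        h = lambda n: math.dist(pos[n], pos[goal])        # admissible heuristic
        frontier = [(h(start), 0.0, start, None)]
        parent, best_g = {}, {start: 0.0}
        while frontier:
            _, g, node, par = heapq.heappop(frontier)
            if node in parent:
                continue                                  # already expanded
            parent[node] = par
            if node == goal:                              # rebuild the path
                path = [node]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return path[::-1]
            for nb, cost in graph[node]:
                ng = g + cost
                if ng < best_g.get(nb, float("inf")):
                    best_g[nb] = ng
                    heapq.heappush(frontier, (ng + h(nb), ng, nb, node))
        return None                                       # no collision-free path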
It is easy to retrieve small parts from small videos, and easy to retrieve middle-sized parts from large videos; the difficulty lies in retrieving small parts from large videos. There is strong demand for identifying plays in sports videos, where plays are described by the motions of players. This paper proposes a play retrieving method based on both motion compensation vectors and normal color frames in MPEG sports videos. This work uses one-dimensional degenerated descriptions of each motion image between two adjacent frames. Connecting the one-dimensional degenerated descriptions along the time direction yields a space-time map, which describes a sequence of frames as a two-dimensional image. Using this space-time map on motion compensation vector frames and normal color frames, this work shows how to create a new, better template from a single template for retrieving a small number of plays in a huge number of frames. In an experiment, the resulting F-measure reaches 0.955.
Through an in-depth study of existing fingerprint identification technologies, combined with the actual characteristics of embedded systems, this paper improves an existing fingerprint identification algorithm, reducing the time complexity of the matching algorithm. The experimental results show that the proposed fingerprint identification algorithm fully meets the requirements of embedded systems and therefore has high practical value.
As the main challenges for target tracking are accounting for target scale change and achieving real-time performance, we combine the Mean-Shift and PCA-SIFT algorithms to solve the problem. We introduce a similarity comparison method to determine how the target scale changes, and take different strategies according to the situation. Because a growing target scale causes localization error, we employ backward tracking to reduce the error. Mean-Shift performs poorly when tracking a scale-changing target due to the fixed bandwidth of its kernel function. To overcome this problem, we introduce PCA-SIFT matching: through keypoint matching between the target and the template, the scale of the tracking window can be adjusted adaptively. Because this algorithm is sensitive to wrong matches, we introduce RANSAC to reduce mismatches as far as possible, and target relocation is triggered when the number of matches is too small. In addition, we take comprehensive consideration of target deformation and error accumulation to put forward a new template update method. Experiments on five image sequences and comparison with six other algorithms demonstrate the favorable performance of the proposed tracking algorithm.
This paper addresses the problems associated with maneuvering target tracking based on the current statistical model in three-dimensional space. First, a three-dimensional model with nine state variables is presented. Then an adaptive Kalman filtering algorithm is designed using the mean and variance of the maneuvering acceleration. Finally, a simulation comparing the proposed adaptive Kalman filter with the direct calculation method is given for a maneuvering target in three dimensions. The simulation results show that the proposed approach yields better estimates of target position, velocity and acceleration.
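As a reference point, one predict/update step of a constant-acceleration Kalman filter on a single axis (the paper's model stacks three such axes into nine states; the adaptation from acceleration statistics is summarized here as the q and r inputs):

    import numpy as np

    def kalman_step(x, P, z, dt, q, r):
        # x: state (pos, vel, acc); P: covariance; z: position measurement
        F = np.array([[1, dt, dt**2 / 2],
                      [0, 1,  dt],
                      [0, 0,  1]], dtype=float)       # constant-acceleration transition
        H = np.array([[1.0, 0.0, 0.0]])               # position-only observation
        Q = q * np.eye(3)                             # process noise (adapted online
                                                      # in the paper's method)
        x = F @ x                                     # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                           # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
        x = x + (K @ (z - H @ x)).ravel()             # update with the measurement
        P = (np.eye(3) - K @ H) @ P
        return x, P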
Although the eye-tracking method has recently been introduced into behavioral experiments based on the dot-probe paradigm, some characteristics of eye-tracking data do not draw as much attention as traditional characteristics like reaction time. It is also necessary to associate eye-tracking data with characteristics of the images shown in experiments. In this research, new variables, such as fixation length, number of fixations and number of eye movements, were extracted from the eye-tracking data of a behavioral experiment based on the dot-probe paradigm. They were analyzed and compared to traditional reaction time. After the analysis of positive and negative scenery images, parameters such as the hue frequency spectrum PAR (Peak to Average Ratio) were extracted and showed differences between negative and positive images. These parameters allowed an SVM classifier to discriminate scenery images well by emotion. Besides, it was found that an image's hue frequency spectrum PAR is clearly related to eye-tracking statistics: when the dot was on the negative side, negative images' hue frequency spectrum PAR conformed to a hyperbolic distribution with respect to horizontal eye jumps, while that of positive images was linear in horizontal eye jumps. The result could help to explain the mechanism of human attention and boost the study of computer vision.
There is still a lack of effective paradigms and tools for analysing and discovering the contents and relationships of project knowledge contexts in the field of project management. In this paper, a new framework for extracting and representing project knowledge contexts using topic models and dynamic knowledge maps under big data environments is proposed and developed. The conceptual paradigm, theoretical underpinning, extended topic model, and illustration examples of the ontology model for project knowledge maps are presented, with further research work envisaged.
A multi-stage noise adaptive switching filter (MSNASF) is proposed for the restoration of images extremely corrupted by impulse and impulse-like noise. The filter consists of two steps: noise detection and noise removal. The proposed extrema-based noise detection scheme utilizes the false contouring effect to improve the detection rate at low noise density. It is adaptive and detects not only impulse but also impulse-like noise. In the noise removal step, a novel multi-stage filtering scheme is proposed that replaces each corrupted pixel with the nearest uncorrupted median to preserve details. When compared with other methods, MSNASF provides better peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). A subjective evaluation carried out online also demonstrates that MSNASF yields higher fidelity.
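A minimal sketch of the "nearest uncorrupted median" replacement step, given a binary mask from the detection stage; the window-growth limit is an assumption:

    import numpy as np

    def replace_corrupted(img, noise_mask, max_radius=5):
        # Replace each flagged pixel with the median of uncorrupted neighbours,
        # growing the window until at least one clean pixel is found.
        out = img.copy()
        for y, x in zip(*np.nonzero(noise_mask)):
            for r in range(1, max_radius + 1):
                ys = slice(max(0, y - r), y + r + 1)
                xs = slice(max(0, x - r), x + r + 1)
                clean = img[ys, xs][~noise_mask[ys, xs]]
                if clean.size:                        # clean neighbours found
                    out[y, x] = np.median(clean)
                    break
        return out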
Supervised machine learning algorithms have been extensively studied and applied to different fields of image processing in past decades. This paper proposes a new machine learning algorithm, called margin setting (MS), for restoring images that are corrupted by salt-and-pepper impulse noise. Margin setting generates a decision surface to classify noise pixels and non-noise pixels. After the noise pixels are detected, a modified ranked order mean (ROM) filter is used to replace the corrupted pixels for image reconstruction. The margin setting algorithm is tested with grayscale and color images at different noise densities. The experimental results are compared with those of the support vector machine (SVM) and standard median filter (SMF). The results show that margin setting outperforms these methods with higher Peak Signal-to-Noise Ratio (PSNR), lower mean square error (MSE), higher image enhancement factor (IEF) and higher Structural Similarity Index (SSIM).
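For reference, the PSNR figure used in such comparisons is computed as follows (8-bit images assumed):

    import numpy as np

    def psnr(ref, img):
        # Peak signal-to-noise ratio in dB between a clean reference and a restoration.
        mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)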
Auscultation of heart sound (HS) signals has served as an important primary approach to diagnosing cardiovascular diseases (CVDs) for centuries. Confronting the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction techniques has witnessed explosive development. Yet most existing HS feature extraction methods adopt acoustic or time-frequency features that exhibit a poor relationship with diagnostic information, restricting the performance of further interpretation and analysis. Tackling this bottleneck, this paper proposes a novel murmur-based HS feature extraction method, since murmurs contain massive pathological information and are regarded as the first indications of pathological occurrences on heart valves. Adopting the discrete wavelet transform (DWT) and the Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS from five different abnormal HS signals with the extracted features, the proposed method provides an attractive candidate for automatic HS auscultation.
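A minimal sketch of a framewise Shannon energy envelope, one common way to realize the envelope step named above (frame and hop sizes are assumptions; the paper's DWT preprocessing is omitted):

    import numpy as np

    def shannon_envelope(x, frame=512, hop=256):
        x = x / (np.max(np.abs(x)) + 1e-12)           # normalize to [-1, 1]
        e = -x**2 * np.log(x**2 + 1e-12)              # per-sample Shannon energy
        n = max(0, 1 + (len(e) - frame) // hop)
        return np.array([e[i*hop : i*hop + frame].mean() for i in range(n)])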
Camera calibration is one of the essential steps in computer vision research. This paper describes a real-time OpenCV-based camera calibration system developed and implemented in the VS2008 environment. Experimental results show that the system achieves simple and fast camera calibration with higher precision than MATLAB and without manual intervention, and it can be widely used in various computer vision systems.
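The paper's system is built in C++ under VS2008; the same OpenCV calibration flow is sketched here in Python for brevity (board geometry and file names are placeholders):

    import cv2
    import numpy as np

    pattern = (9, 6)                                  # inner chessboard corners
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_pts, img_pts = [], []
    for fname in ["view0.png", "view1.png", "view2.png"]:
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:                                     # keep views with a detected board
            obj_pts.append(objp)
            img_pts.append(corners)

    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)
    print("reprojection RMS:", rms)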
Contrast pattern based data mining is concerned with the mining of patterns and models that contrast two or more datasets. Contrast patterns can describe similarities or differences between the datasets; they represent strong contrast knowledge and have been shown to be very successful for constructing accurate and robust clusters and classifiers. The increasing use of contrast pattern data mining has initiated a great deal of research and development in the field. This paper gives a comprehensive review of existing contrast pattern based data mining research, generally categorized into background and representation; definitions and mining algorithms; contrast pattern based classification, clustering and other applications; and future research trends. The primary goal of this paper is to serve as a reference for interested researchers to gain an overall picture of current contrast pattern based data mining development and to identify potential directions for future investigation.
To survive under conditions of strong jamming and interference, fast frequency-hopped signals are employed in satellite communication systems. This paper discusses the nonlinear phases induced by the equipment and the atmosphere, and their influence on the FFH/BPSK tracking loop. Two methods are developed: compensating the phase based on channel estimation, and compensating the Doppler frequency based on velocity normalization. Simulation results for a real circuit with proper parameters show that the degradation due to frequency-hopped demodulation is only a fraction of one dB in an AWGN environment over a satellite channel.
At present, most research keeps the amplitude of the driving force fixed when detecting a weak signal buried in noise using a Duffing oscillator. In this paper, we find that the critical value of the driving force corresponding to the critical state varies with noise power. Taking this into consideration, a new adaptive method for detecting weak signals is proposed, in which the amplitude of the driving force is determined by the input power. The simulation results indicate that the lowest SNR threshold attainable with this method is lower than those of the methods proposed in most papers.
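For reference, weak-signal detection with a Duffing oscillator is commonly based on a form such as

    \ddot{x} + k\dot{x} - x + x^{3} = F\cos(\omega t) + s(t) + n(t)

where k is the damping ratio, s(t) the weak signal under test, n(t) the noise, and F the driving amplitude swept toward the critical value at which the oscillator switches between the chaotic and the large-scale periodic state; the adaptive method above ties F to the measured input power.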
Modern access networks are widely constructed as passive optical networks (PONs) to meet the growing bandwidth demand. However, higher bandwidth means more energy consumption. To save energy, some research works propose a dual-mode energy saving mechanism that allows the ONU to alternate periodically between active and sleep modes. However, such a dual-mode design may induce unnecessary power consumption or increased packet delay when only downstream data exist for most of the time. In this paper, we propose a new tri-mode energy saving scheme for Ethernet PON (EPON). The tri-mode design combines the dual-mode mechanism with a doze mode, allowing the ONU to switch among the three modes. In the doze mode, the ONU may receive downstream data while keeping its transmitter closed, a scenario often observed for real-time video downstream transmission. Furthermore, low packet delay for high-priority upstream data is attained through an early wake-up mechanism employed in both energy saving modes. Energy saving and system efficiency are thus achieved jointly while maintaining differentiated QoS for data with various priorities. Simulation results demonstrate the effectiveness of this mechanism.
Text localization in natural scene images is an important prerequisite for many content-based image analysis tasks. This paper proposes a novel text localization algorithm. Firstly, a fast pruning algorithm is designed to extract Maximally Stable Extremal Regions (MSER) as basic character candidates. Secondly, these candidates are filtered using the properties of fitted ellipses and the distribution properties of characters to exclude most non-characters. Finally, a new extremal region projection merging algorithm is designed to group character candidates into words. Experimental results show that the proposed method has an advantage in speed and achieves relatively high precision and recall rates compared with recently published algorithms.
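A minimal sketch of the candidate extraction stage using OpenCV's MSER implementation (parameters left at defaults; the pruning, ellipse-fit filtering and projection merging described above are not shown):

    import cv2

    img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder file name
    mser = cv2.MSER_create()                               # tune delta/area limits as needed
    regions, bboxes = mser.detectRegions(img)              # basic character candidates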
With fierce competition in the banking industry, more and more banks have realised that accurate customer segmentation is of fundamental importance, especially for the identification of high-value customers. To address this problem, we collected real data about private banking customers of a commercial bank in China and conducted an empirical analysis by applying the K-means clustering technique. For determining the value of K, we propose a mechanism that meets both academic requirements and practical needs. Through K-means clustering, we successfully segmented the customers into three categories, and the features of each group are illustrated in detail.
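The abstract does not spell out the proposed K-selection mechanism; for orientation, the common inertia ("elbow") heuristic that such mechanisms refine looks like this:

    import numpy as np
    from sklearn.cluster import KMeans

    X = np.random.rand(500, 6)                 # placeholder customer features
    inertia = {k: KMeans(n_clusters=k, n_init=10).fit(X).inertia_
               for k in range(2, 10)}          # look for the 'elbow' in these values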
Nowadays customer attrition is increasingly serious in commercial banks. To combat this problem comprehensively, mining customer evaluation texts is as important as mining structured customer data. To extract hidden information from customer evaluations, textual feature selection, classification and association rule mining are necessary techniques. This paper presents all three techniques using Chinese word segmentation, C5.0 and Apriori, and a set of experiments was run on a collection of real textual data comprising 823 customer evaluations from a Chinese commercial bank. Results, consequent solutions, and some advice for the commercial bank are given.
Convolutional Neural Networks (CNNs), which have shown success in achieving translation invariance for many image processing tasks, are investigated for continuous speech recognition in this paper. Compared to Deep Neural Networks (DNNs), which have been proven successful in many speech recognition tasks, CNNs can reduce the model size significantly while achieving even better recognition accuracy. Experiments on the standard speech corpus TIMIT showed that CNNs outperformed DNNs in terms of accuracy even with a smaller model size.
A weak signal processing system for spectrum analysis using low-noise isolation and amplification and low-pass filter (LPF) methods is developed. In order to extract the spectrum signals from external disturbance and to regenerate the detected results dynamically, a method of synchronized isolation and amplification is used to amplify the weak signals exponentially and to isolate noise of random frequency by electromagnetic induction technology, and an LPF is designed to keep the output signals stable. Furthermore, an interface based on an MFC single-document view is programmed to display and save the scanned spectrum diagram. As a result, the system achieves amplification of more than 1.5 million times, current drift of less than 9 nA, and a cut-off frequency of 10 Hz.
Frequency estimation via signal sorting is widely recognized as one of the most practical technologies in signal processing. However, the frequencies estimated via signal sorting may be inaccurate and biased due to signal fluctuation under different emitter working modes, transmitter circuit problems, environmental noise or unknown interference sources. Therefore, further analyzing and refining signal frequencies after signal sorting has become an important issue. To address this problem, we put forward an iterative frequency refinement method based on maximum likelihood, in which the initial frequency estimates are refined iteratively. Experimental results indicate that the refined signal frequencies are more informative than the initial ones. As another advantage of our method, noise and interference sources can be filtered out simultaneously. Its efficiency and flexibility enable the method to be applied in a wide range of areas, e.g., communication, electronic reconnaissance and radar intelligence analysis.
Common Spatial Pattern (CSP) is one of the most effective feature extraction algorithms for Brain-Computer Interfaces (BCI). Despite its advantages of wide versatility and high efficiency, CSP is non-robust to noise and prone to overfitting when the number of training samples is limited. To overcome these problems, Regularized Common Spatial Pattern (RCSP) was proposed. RCSP regularizes the covariance matrix estimation with two parameters, which reduces the estimation error and improves stationarity under small-sample conditions. However, RCSP does not make full use of frequency information. In this paper, we present a filter ensemble technique for RCSP (FERCSP) to further extract frequency information and aggregate all the RCSPs efficiently into an ensemble-based solution. The performance of the proposed algorithm is evaluated on data set IVa of BCI Competition III against five other RCSP-based algorithms. The experimental results show that FERCSP significantly outperforms the existing methods in classification accuracy, beating the CSP and R-CSP-A algorithms in all five subjects with an average improvement of 6% in accuracy.
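For reference, the plain CSP computation that RCSP regularizes can be written as a generalized eigenproblem on the two class covariance matrices (a standard formulation, not the paper's code):

    import numpy as np
    from scipy.linalg import eigh

    def csp_filters(C1, C2, n_pairs=3):
        # Solve C1 w = lambda (C1 + C2) w; the extreme eigenvalues give the most
        # discriminative spatial filters for the two classes.
        w, V = eigh(C1, C1 + C2)               # eigenvalues in ascending order
        pick = np.r_[np.arange(n_pairs), np.arange(len(w) - n_pairs, len(w))]
        return V[:, pick].T                    # rows project EEG onto CSP components

RCSP replaces C1 and C2 with regularized estimates before this step; FERCSP additionally runs the procedure behind a bank of band-pass filters and aggregates the results.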
Recently the Fractional Fourier Transform (FrFT) has found a variety of applications in digital signal and image processing. This paper presents a novel hardware architecture for real-time computation of the Discrete Fractional Fourier Transform (DFrFT), which can easily be extended to other fractional transforms. The proposed architecture has been verified on a Xilinx FPGA (XC6VLX240T), where it runs at frequencies up to 291 MHz with high accuracy.
This paper presents an improved MAP decoder to be used for joint source-channel arithmetic decoding of H.264 symbols. The proposed decoder uses not only the intentional redundancy inserted via a forbidden symbol but also exploits residual redundancy through a syntax checker. A breadth-first suboptimal sequential MAP decoder is employed, which eliminates paths in the decoding tree that result in invalid syntax or that decode a forbidden symbol. In contrast to previous methods, this is done as each channel bit is decoded. Simulations using intra prediction modes show improvements in error rates, for example a syntax element error rate reduction by an order of magnitude at a channel SNR of 7.33 dB. The cost of this improvement is additional computational complexity spent on syntax checking.
Markov logic networks, which unify probabilistic graphical models and first-order logic, provide an excellent framework for ontology matching. The existing approach requires a threshold to produce matching candidates and uses a small set of constraints acting as a filter to select the final alignments. We introduce a novel match propagation strategy to model the influences between potential entity mappings across ontologies, which helps to identify correct correspondences and to recover missed ones. Since estimating an appropriate threshold is difficult, we propose an interactive method for threshold selection, through which we obtain an additional measurable improvement. Experiments on a public dataset demonstrate the effectiveness of the proposed approach in terms of the quality of the resulting alignment.
The implementation of a novel indexing model for images with verified facial data is presented in this paper. The indexer uses histogram-based clustering to select the skin color of the subject of the image and classifies the image accordingly. Fuzzy classification techniques are used to detect the skin tone of the images.
Adaptive Gaussian Chirplet Decomposition (AGCD) is a high-resolution time-frequency signal decomposition algorithm. The Gaussian chirplet basis adopted has variable time width and a frequency center with linear chirp, giving good energy localization in both time and frequency. But this basis is not orthogonal, and the computation involved in searching for bases when decomposing a signal is enormous. AGCD reduces the computation by converting the optimization process into a traditional curve-fitting problem, but its performance is highly dependent on the initial selection. Traditional energy-based initial selection fails in some cases when two or more bases deeply overlap. The proposed maximum-matching-based initial selection is a fast and accurate basis searching algorithm, which chooses the best-correlated basis each time from several candidates. Simulation results show that the new algorithm is much more stable and accurate than the energy-based one without increasing computation.
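For reference, the four-parameter Gaussian chirplet in question is commonly written as

    g(t) = (\pi\sigma^{2})^{-1/4}
           \exp\!\left(-\frac{(t - t_{0})^{2}}{2\sigma^{2}}\right)
           \exp\!\left(j\left(\omega_{0}(t - t_{0}) + \tfrac{c}{2}(t - t_{0})^{2}\right)\right)

with time center t0, frequency center ω0, duration σ and chirp rate c; the decomposition searches this parameter space for the atom best correlated with the residual signal at each iteration.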
X-ray pulsar-based spacecraft navigation is emerging as a new autonomous navigation technology with high potential, owing to its high reliability, good autonomy, high precision and wide applicability. Timing and the determination of position and attitude are the main prospects of using X-ray pulsars [1,2]. To realize pulse signal timing, a Phase-Locked Loop (PLL) circuit for tracking the pulsar signal frequency is designed in this paper; the PLL is built in the Simulink environment and tested with a simple pulse signal to obtain circuit parameters with good tracking behavior. The Crab Nebula pulse profile, which is used as the simulation signal source, is modelled mathematically [3]. The simulation results show that the designed PLL circuit can track the frequency of the pulse signal precisely and can be used for spacecraft clock correction.
Moiré methods are commonly used in various engineering metrology practices such as deformation measurement and surface topography. In the past, most applications required human intervention in fringe pattern analysis and the development of image processing to analyze the moiré patterns. In a recent application using circular-grating moiré patterns, researchers developed a graphical analysis method to determine the in-plane (2-D) displacement change between the two circular gratings by analyzing the moiré pattern change. In this work, an artificial neural network approach is proposed to detect and locate moiré fringe centers of circular gratings without image preprocessing and curve fitting. The intensity values in columns of the transformed circular moiré pattern were extracted as the input to the neural network, and moiré fringe centers extracted using the graphical analysis method were used as the training target. The neural network produced reasonably accurate output, with an average mean error of less than 1 pixel and a standard deviation of less than 4 pixels in locating the moiré fringe centers. The results show that the neural network approach is applicable to moiré fringe center determination and, with further improvement, feasible for automating moiré pattern analysis.
Three nonlinear analysis techniques, namely the cross-recurrence plot, the line of synchronization, and the cross-wavelet transform, are proposed to estimate the coherent phase vibrations of nonlinear and non-stationary time series. The case study utilizes the monthly averages of sunspot areas during the time interval from May 1874 to August 2014. The following prominent results are found: (1) the phase-leading hemisphere of long-term sunspot areas has changed twice in the past 140 years, indicating that hemispheric imbalances and apparent phase differences between the two hemispheres are a prevalent behavior and are not anomalous; (2) the alternating regularity of hemispheric asynchronism exhibits a cyclical pattern of 4.5+3.5 cycles, and the magnetic flux excess in a certain hemisphere during the ascending branch of a cycle can be taken as an indication of the phase-leading hemisphere in this cycle. We firmly believe that powerful nonlinear approaches are more advanced than classical linear methods when combined to determine the dynamic complexity of nonlinear physical systems.
MongoDB (from "humongous") is an open-source document database and a leading NoSQL database. NoSQL ("Not Only SQL") refers to a new generation of non-relational, open-source, horizontally scalable databases that provide a mechanism for the storage and retrieval of documents. Previously, data were stored and retrieved using SQL queries; here we use MongoDB instead of MySQL, so no SQL queries are applied. Document files are imported directly into a local folder (drive) and retrieved from it using IO streams: a BufferedReader for importing document files into the folder and a BufferedWriter for retrieving them. Security is provided for the stored files, because anyone who can reach a local folder could otherwise view or modify them. To prevent this, the original document files are converted to another format; in this paper, a binary format is used. After conversion, the documents are stored directly in the folder, and at storage time a private key for accessing each file is issued. If any user tries to open a document file, its data appear only in binary format; the file's owner alone can view the original format by using the secret key received from the cloud.
The association of subfigures in a multi-panel figure with related text in the accompanying caption and research article is necessary for the implementation of a multi-modal information retrieval system. The panel labels in the multi-panel figure are used as the source for making this kind of association. In this paper, we propose a novel method for detecting panel labels in multi-panel figures. As a preprocessing step, the proposed method segments the multi-panel figure and its accompanying caption into subfigures and sub-captions, respectively. Next, the features of the panel label, i.e., its area and its distance from the borders in the upper-left-most subfigure, are computed. These features are then used for detecting the panel labels located in the rest of the subfigures of the same multi-panel figure. Experiments on multi-panel figures selected from the imageCLEF2013 dataset show promising results.
With the development of Information Technology, people have entered the era of Big Data, and the demand for intelligent information is more intense. How to make computers provide more personalized and efficient services for all walks of life is worth exploring. In this paper, we aim to predict a user's character by analyzing the textual content of his or her micro-blog, which is the foundation of personalized service. Our study describes a method of building a prediction model for user character using Bayesian algorithms. Experimental results show that the Naïve Bayes approach is a valid and promising analytic method for micro-blog character analysis.
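A minimal sketch of a bag-of-words Naive Bayes pipeline of this kind (texts, labels and the character taxonomy are placeholders; the paper works on Chinese micro-blog text):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    texts = ["love hiking and meeting people",
             "prefer quiet evenings with books"]      # placeholder micro-blog texts
    labels = ["extravert", "introvert"]               # placeholder character labels

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(texts, labels)
    print(model.predict(["enjoy big parties"]))       # predicted character label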
Hand tracking is becoming more and more popular in the field of human-computer interaction (HCI), and many studies in this area have made good progress. However, robust long-term hand tracking remains difficult. On-line learning technology has great potential for tracking because of its strong adaptive learning ability. To address the problem, we combine an on-line learning technique called on-line boosting with an off-line trained detector to track the hand. The contributions of this paper are: 1) we propose a learning method with an off-line model to solve the drift of on-line learning; 2) we build a framework for hand tracking based on this learning method. The experiments show that, compared with three other methods, the proposed tracker is more robust in challenging cases.
Because of its simplicity and good performance, the block adaptive quantization (BAQ) algorithm has become a popular method for spaceborne synthetic aperture radar (SAR) raw data compression. As the distribution of SAR data can be accurately modeled as Gaussian, the algorithm adaptively quantizes the data using the Lloyd-Max quantizer, which is optimal for standard Gaussian signals. However, due to the complexity of the imaged target features, the probability distribution function of some SAR data deviates from the Gaussian distribution, and the BAQ compression performance declines. In view of this situation, this paper proposes a method to judge whether the data satisfy a Gaussian distribution by using the geometrical relationship between the standard Gaussian curve and a triangle whose area equals that under the Gaussian curve: the coordinates of the intersections of the two curves are computed, and the integral values within each node are compared to form three judgment conditions. Finally, data satisfying these conditions are compressed by BAQ, and the rest by DPCM. Experimental results indicate that the proposed scheme improves the performance compared with the BAQ method.
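For reference, the core of BAQ on one block, using the 3-bit Lloyd-Max reconstruction levels commonly tabulated for unit-variance Gaussian data (the paper's Gaussianity test and DPCM fallback are not shown):

    import numpy as np

    LEVELS = np.array([-2.152, -1.344, -0.756, -0.245,
                        0.245,  0.756,  1.344,  2.152])   # 3-bit Lloyd-Max levels

    def baq_encode(block):
        sigma = block.std() + 1e-12                       # per-block scale estimate
        idx = np.abs(block[:, None] / sigma - LEVELS).argmin(axis=1)
        return idx.astype(np.uint8), sigma                # 3-bit codes + side info

    def baq_decode(idx, sigma):
        return LEVELS[idx] * sigma                        # reconstructed samples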
Intra-pulse analysis plays an important role in electronic warfare. Intra-pulse feature extraction focuses on primary parameters such as instantaneous frequency, modulation, and symbol rate. In this paper, automatic modulation recognition and feature extraction for combined BPSK-LFM modulation signals based on a decision-theoretic approach is studied. The simulation results show a good recognition effect and high estimation precision, and the system is easy to realize.
Intrusion detection systems play a highly significant role in securing computer networks and information systems. To assure the reliability and quality of computer networks and information systems, it is highly desirable to develop techniques that detect intrusions. We bring the concept of statistical process control (SPC) to intrusion detection in computer networks and information systems, and propose an exponentially weighted moving average (EWMA) type quality monitoring scheme. Our proposed scheme has only one parameter, which differentiates it from past versions. We construct the control limits for the proposed scheme and investigate their effectiveness, provide an industrial example for the sake of clarity for practitioners, compare the proposed scheme with EWMA schemes and the p chart, and finally give some recommendations for future work.
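A minimal EWMA control-chart sketch of the kind of monitoring scheme described (the in-control mean and standard deviation are estimated from the data here purely for illustration):

    import numpy as np

    def ewma_alarms(x, lam=0.2, L=3.0):
        # Flag observations whose EWMA statistic exits the time-varying L-sigma limits.
        mu, sigma = x.mean(), x.std(ddof=1)
        z, alarms = mu, []
        for i, xi in enumerate(x, start=1):
            z = lam * xi + (1 - lam) * z
            width = sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
            if abs(z - mu) > L * width:
                alarms.append(i - 1)              # index of the out-of-control point
        return alarms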
We have established a DWT-based second-order autoregressive model (AR(2)) to forecast stock values. This method requires the user to decide on the trend of the stock prices. We then used WNN to forecast stock prices, which does not require the user to decide on the trend. Comparing these two methods, we see that AR(2) does not perform as well when there is no trend in the stock prices, whereas WNN is not influenced by the presence or absence of trends.
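For reference, the AR(2) recursion underlying the forecast (written here for the modeled series; in the paper it is applied to DWT-processed prices) is

    x_{t} = \phi_{1} x_{t-1} + \phi_{2} x_{t-2} + \varepsilon_{t}

so each forecast is a weighted sum of the two most recent values plus an innovation term, which is why the fitted model inherits whatever trend assumption the user builds into it.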
A multitaper spectral analysis approach is built based on the minimization of a cost function. The performance analysis indicates that this approach has bias and variance comparable to those of the discrete prolate spheroidal sequence (DPSS) estimator. Compared with the DPSS approach, the multitaper estimator here avoids solving a matrix eigenvalue problem and needs less calculation, thanks to an analytical expression for the tapers. The validity of the estimator is verified by computer simulation of a discrete AR process as well as a white noise sequence.