This PDF file contains the front matter associated with SPIE Proceedings Volume 8878, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Corner detection has proven very useful in many computer vision applications. Several approaches have been proposed, but few are simultaneously accurate, efficient, and suitable for complex applications (such as DSP platforms). In this paper, a corner detector based on invariant analysis is proposed. The detector assumes that an ideal corner in a gray-level image exhibits a well-defined corner structure under an annulus mask. An invariant function is put forward whose value for an ideal corner is a constant, so candidate corners can be verified by comparing their invariant function values against this constant. Experiments show that the new corner detector is accurate and efficient, and its simple computation makes it suitable for complex applications.
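The abstract does not give the exact form of the invariant function, so the sketch below uses a hypothetical stand-in: sample the annulus around a candidate and take the fraction of ring pixels darker than the candidate, a rotation-invariant statistic that stays approximately constant (roughly the corner's opening angle divided by 360°) for an ideal corner.

```python
import numpy as np

def ring_invariant(img, y, x, radius=4, n=16):
    # Sample n gray levels on a circle of the given radius around (y, x)
    # and return the fraction of samples darker than the candidate pixel.
    angles = 2 * np.pi * np.arange(n) / n
    ys = np.clip(np.round(y + radius * np.sin(angles)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(x + radius * np.cos(angles)).astype(int), 0, img.shape[1] - 1)
    return float(np.mean(img[ys, xs].astype(float) < img[y, x]))

# Synthetic right-angle corner: a dark quadrant on a bright background.
img = np.full((21, 21), 200, dtype=np.uint8)
img[10:, 10:] = 50
v = ring_invariant(img, 9, 9)     # candidate just inside the bright side
flat = ring_invariant(np.full((21, 21), 200, dtype=np.uint8), 9, 9)
```

A candidate would be accepted when its value stays near the constant predicted for an ideal corner (about 0.25 for a right angle here), while flat regions score 0.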
We introduce and compare ten denoising filters, all proposed during the last fifteen years. In particular, the state-of-the-art denoising algorithms NLM and BM3D have attracted much attention, and several extensions have been proposed to improve noise reduction based on these two algorithms. Optimal dictionaries, sparse representations, and appropriate shapes of the transform's support are also considered for image denoising. The various filters are compared by measuring the SNR on a phantom image and the denoising effectiveness on a clinical image; computational time is also evaluated.
Recent studies show that wavelet-based image enhancement methods produce high-quality enhanced images. However, images enhanced by most wavelet-based methods lose spatial resolution because of the critical downsampling included in the wavelet transform. In this paper, we propose an image enhancement scheme based on the nonsubsampled contourlet transform. Because edges and texture are fundamental to image representation, enhancing them is an effective means of enhancing spatial resolution. Experimental results show that the proposed scheme enhances detail and increases the contrast of the enhanced image at the same time.
The distribution of image data points forms its geometrical structure. This structure characterizes the local variation and provides valuable heuristics for regularizing the image restoration process. However, most existing approaches to sparse coding fail to consider this character of the image. In this paper, we address the deblurring problem of image restoration. We analyze the distribution of the input data points and, inspired by manifold learning, build a k-NN graph to characterize the geometrical structure of the data, so that the local manifold structure can be explicitly taken into account. To enforce the invariance constraint, we introduce a patch-similarity term into the cost function that penalizes nonlocal variation of the image. Experimental results show the effectiveness of the proposed scheme.
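The k-NN graph over image patches can be built as sketched below (patch extraction and the graph-regularized sparse-coding objective itself are omitted; `k` and the brute-force distance computation are illustrative choices):

```python
import numpy as np

def knn_graph(patches, k):
    # Adjacency W[i, j] = 1 when patch j is among the k nearest
    # (Euclidean) neighbours of patch i; this encodes the local
    # manifold structure of the data.
    d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # a patch is not its own neighbour
    idx = np.argsort(d2, axis=1)[:, :k]
    W = np.zeros((len(patches), len(patches)))
    W[np.arange(len(patches))[:, None], idx] = 1.0
    return W

patches = np.random.default_rng(1).random((6, 16))   # six 4x4 patches, flattened
W = knn_graph(patches, 2)
```

The resulting adjacency (or its graph Laplacian) is what a manifold-regularized cost function would penalize against.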
This paper proposes an image restoration method based on a sparsity constraint. Following the principles of compressed sensing, the observed image is transformed into the wavelet domain, and the restoration problem is converted into an unconstrained convex optimization problem by limiting the number of non-zero wavelet coefficients; a gradient projection method is then used to solve the optimization problem and restore the input image. Experiments show that, compared with traditional total-variation regularized restoration, the proposed method converges faster and is more robust.
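The abstract does not spell out the solver; a closely related proximal-gradient (iterative shrinkage) sketch for the l1-penalized problem is shown below, with a generic random matrix standing in for the blur operator composed with the wavelet synthesis:

```python
import numpy as np

def ista(A, y, lam, step, iters):
    # Minimise ||A x - y||^2 / 2 + lam * ||x||_1 by a gradient step on
    # the data term followed by soft thresholding (the l1 proximal map).
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - y))
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [3.0, -2.0]                 # sparse "wavelet coefficients"
y = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2       # below 1/L, so iterates converge
x_hat = ista(A, y, lam=0.05, step=step, iters=500)
```

With a small penalty and enough iterations the sparse coefficients are recovered almost exactly.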
In this paper, a modified adaptive nonlocal means (ANLM) filter for image denoising is investigated, obtained by introducing the image gradient into the classical nonlocal means filter. The proposed algorithm takes the orientation of the matching neighborhood into consideration and adaptively selects the filtering parameter based on the image gradient. Moreover, the symmetry or approximate symmetry of some filtered images is also exploited. Compared with the classical nonlocal means filter, the new method can therefore exploit many more similar pixels. The proposed approach is applied to several real images corrupted by white Gaussian noise of different standard deviations. Comparative experimental results show that the improved ANLM filter achieves superior denoising performance.
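For reference, a brute-force classical NLM estimate of a single pixel is sketched below; the paper's ANLM additionally orients the matching neighborhood and adapts the filtering parameter h to the local gradient, which this sketch keeps fixed:

```python
import numpy as np

def nlm_pixel(img, y, x, f=1, t=5, h=10.0):
    # Weight each pixel in the (2t+1)^2 search window by the Gaussian of
    # its (2f+1)^2 patch distance to the reference patch, then average.
    pad = np.pad(img.astype(float), f + t, mode='reflect')
    yc, xc = y + f + t, x + f + t
    ref = pad[yc - f:yc + f + 1, xc - f:xc + f + 1]
    num = den = 0.0
    for dy in range(-t, t + 1):
        for dx in range(-t, t + 1):
            cand = pad[yc + dy - f:yc + dy + f + 1, xc + dx - f:xc + dx + f + 1]
            w = np.exp(-((ref - cand) ** 2).mean() / h ** 2)
            num += w * pad[yc + dy, xc + dx]
            den += w
    return num / den

est = nlm_pixel(np.full((9, 9), 7.0), 4, 4)   # a flat image is left unchanged
```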
Image preprocessing plays an important role in finger vein recognition systems. However, previous preprocessing schemes leave weaknesses to be resolved before high recognition performance can be achieved. In this paper, we propose a new finger vein preprocessing pipeline that includes finger region localization, alignment, finger vein ROI segmentation, and enhancement. Experimental results show that the proposed scheme enhances the quality of finger vein images effectively and reliably.
This paper proposes a novel approach to ionogram trace enhancement that yields a "clean" ionogram containing only real ionospheric echo signals, which is very important for subsequent manual or automatic ionogram interpretation and scaling. Two methods based on the pixel connectedness of ionogram traces are adopted: a max filter and connected components labeling. Experiments show that both methods are feasible and effective; parameter selection and the time complexity of the two methods are also analyzed.
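Connected components labeling, the second of the two methods, can be sketched with a simple 4-connected flood fill; small components can then be discarded as speckle while large ones are kept as echo traces:

```python
import numpy as np

def label_components(mask):
    # Label 4-connected True regions of a boolean mask via flood fill;
    # returns the label image and the number of components found.
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        count += 1
        stack = [(sy, sx)]
        while stack:
            y, x = stack.pop()
            if not (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]):
                continue
            if not mask[y, x] or labels[y, x]:
                continue
            labels[y, x] = count
            stack += [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    return labels, count

mask = np.zeros((6, 6), dtype=bool)
mask[0:2, 0:2] = True                  # one "trace"
mask[4:6, 3:6] = True                  # another, disconnected
labels, n = label_components(mask)
```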
Recent research has made significant progress in single-image dehazing using the dark channel prior, which allows the thickness of the haze to be estimated directly and a high-quality haze-free image to be recovered. However, such methods are inefficient on high-resolution or high-bit-depth images because of their computational complexity, and their results are inaccurate when there are large white objects in the scene. A novel image prior is proposed in this paper to address both drawbacks. We develop a fast single-image dehazing method by replacing the single dark channel with double dark channels at different scales to estimate the global atmospheric light and the transmission. This separation lets us skip the soft matting step, which accounts for about 95% of the computation in the previous method. Experimental results show that our method is much faster than the original and, at the same time, reduces the distortion caused by large white objects in the scene. Compared with the previous method, our new single-image dehazing method achieves the same or better image quality in only around 1/23 of the computation time, while saving considerable memory.
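The dark channel at a single scale is computed as below; running it with two different patch sizes yields the double dark channels used to estimate the atmospheric light and the transmission:

```python
import numpy as np

def dark_channel(img, patch=3):
    # Per-pixel minimum over the RGB channels, followed by a min-filter
    # over a patch x patch window.
    mins = img.min(axis=2)
    r = patch // 2
    padded = np.pad(mins, r, mode='edge')
    out = np.empty_like(mins)
    for y in range(mins.shape[0]):
        for x in range(mins.shape[1]):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, (8, 8, 3))
scene[..., 2] = 0.0        # haze-free pixels have one near-zero channel
dc = dark_channel(scene)
```

The dark channel prior says dc ≈ 0 for haze-free outdoor pixels; haze lifts it toward the atmospheric light, which is why it serves as a haze-thickness estimate.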
Constrained by the performance of IR detectors, infrared images usually have low contrast and little detailed information, giving a poor visual effect. In this paper, a new high-dynamic-range infrared image detail enhancement algorithm is studied: a bilateral filter extracts a base component and a detail component, these two components are compressed to fit the display dynamic range, and they are then recombined to obtain the enhanced output image. The algorithm solves the ripple artifact that exists in traditional infrared digital detail enhancement. Finally, experiments show that the algorithm described in this paper provides a better DDE (digital detail enhancement) effect.
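The base/detail decomposition at the heart of the algorithm can be sketched with a small brute-force bilateral filter (window size, sigmas, and the linear compression/gain below are illustrative choices):

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=2.0, sigma_r=20.0):
    # Edge-preserving smoothing: each output pixel is a spatial-Gaussian
    # times range-Gaussian weighted average of its window.
    pad = np.pad(img.astype(float), radius, mode='reflect')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    out = np.empty(img.shape, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            win = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            w = spatial * np.exp(-(win - img[y, x]) ** 2 / (2 * sigma_r ** 2))
            out[y, x] = (w * win).sum() / w.sum()
    return out

ir = np.tile(np.linspace(0.0, 100.0, 16), (16, 1))   # synthetic IR ramp
base = bilateral(ir)
detail = ir - base
enhanced = 0.8 * base + 2.0 * detail   # compress the base, boost the detail
flat = bilateral(np.full((8, 8), 5.0))
```

Because the range Gaussian keeps edges out of the base component, boosting the detail component raises fine structure without the halo-like ripple a plain Gaussian split would cause.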
This paper presents an improved method for color image enhancement based on the dynamic range compression and shadow compensation method proposed by Felix Albu et al. The improved method adds intensity contrast stretching, implemented under the logarithmic image processing (LIP) model, to the dynamic range compression and shadow compensation. The previous method enhances images while preserving image details and color information without generating visual artifacts; on the premise of retaining these advantages, our improved method enhances the intensity of the whole image, especially the low-light areas. The experimental results illustrate the effectiveness of the proposed method and its superiority over the previous one.
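Under the LIP model, gray tones are combined with operations that remain inside the bounded range [0, M); a minimal sketch of the two basic operations (M = 256 assumed for 8-bit data):

```python
M = 256.0   # upper bound of the gray-tone range in the LIP model

def lip_add(a, b):
    # LIP addition: a (+) b = a + b - a*b/M, closed in [0, M).
    return a + b - a * b / M

def lip_scalar(lam, a):
    # LIP scalar multiplication: lam (x) a = M - M * (1 - a/M)**lam.
    return M - M * (1.0 - a / M) ** lam
```

Contrast stretching in the LIP sense applies `lip_scalar` with a factor lam > 1, brightening low-light areas without ever overflowing the display range, which is why the stretching is implemented under this model.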
Torpedo fuze signal denoising is important for ensuring reliable fuze operation. Exploiting the good denoising characteristics of the wavelet packet transform (WPT), this paper applies WPT to denoise the fuze signal under complex background interference, and the denoising results are simulated in Matlab. The simulation shows that WPT denoising can effectively eliminate the background noise present in the torpedo fuze target signal with high precision and little distortion, thereby improving the reliability of torpedo fuze operation.
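As a minimal stand-in for WPT denoising (the wavelet packet transform recursively splits the detail band as well), a one-level Haar decomposition with soft thresholding of the detail coefficients looks like this:

```python
import numpy as np

def haar_denoise(sig, thresh):
    # One-level orthonormal Haar transform, soft-threshold the detail
    # band, then invert; sig must have even length.
    a = (sig[0::2] + sig[1::2]) / np.sqrt(2)     # approximation band
    d = (sig[0::2] - sig[1::2]) / np.sqrt(2)     # detail band
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
    out = np.empty_like(sig, dtype=float)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 256)
clean = np.sin(2 * np.pi * 4 * t)                # smooth "target" signal
noisy = clean + rng.normal(0.0, 0.3, t.size)     # background interference
den = haar_denoise(noisy, 0.5)
```

The smooth target signal concentrates in the approximation band while broadband noise spreads evenly, so thresholding the detail band removes noise with little distortion of the signal.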
The speech signal is denoised if it is noisy. A wavelet is constructed according to Daubechies' method, and a wavelet packet is derived from the constructed scaling and wavelet functions. The noisy speech signal is decomposed by the wavelet packet, algorithms are developed to detect the beginning and ending points of speech, and a polynomial function is constructed for local thresholding. Different strategies are applied to denoise and compress the coefficients of the decomposed terminal nodes, the wavelet packet tree is reconstructed, and the audio file is rebuilt from the reconstructed data so that the effectiveness of the different strategies can be compared.
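The endpoint detection step can be sketched with short-time energy (frame size and threshold ratio are illustrative; the paper's own detection algorithm is not detailed in the abstract):

```python
import numpy as np

def detect_endpoints(signal, frame=160, thresh_ratio=0.1):
    # Mark frames whose energy exceeds a fraction of the peak frame
    # energy, and return the first/last active sample indices.
    n = len(signal) // frame
    e = np.array([(signal[i * frame:(i + 1) * frame] ** 2).sum() for i in range(n)])
    active = np.nonzero(e > thresh_ratio * e.max())[0]
    return active[0] * frame, (active[-1] + 1) * frame

sig = np.zeros(1600)
sig[480:960] = np.sin(np.linspace(0.0, 60.0 * np.pi, 480))   # "speech" burst
start, end = detect_endpoints(sig)
```

Restricting thresholding and compression to the detected speech span avoids wasting payload on silence.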
In many cases, image acquisition devices have a dynamic range lower than that encountered in the real world, so a single captured image cannot reflect the scene's high dynamic range. Given images of the same scene taken with different exposure times, we apply Laplacian sharpening to enhance their details and fuse them with a wavelet transform under a weighted fusion rule. The fused image thus retains the effective information from the differently exposed images: its resolution is better, and both the bright and dark areas are enhanced. Experiments show that this wavelet-based fusion method can fuse multiple exposures into a high-dynamic-range image.
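The weighting idea can be shown in the spatial domain: each pixel is weighted by its closeness to mid-gray, so well-exposed regions dominate the fusion (the paper applies such weights per wavelet subband instead; the Gaussian width is an illustrative choice):

```python
import numpy as np

def fuse_exposures(imgs):
    # Pixel-wise weighted average of differently exposed images, with
    # weights peaking at mid-gray (128) for 8-bit data.
    stack = np.stack([i.astype(float) for i in imgs])
    w = np.exp(-((stack - 128.0) / 64.0) ** 2) + 1e-6
    return (w * stack).sum(axis=0) / w.sum(axis=0)

under = np.full((4, 4), 30.0)     # under-exposed frame
mid = np.full((4, 4), 128.0)      # well-exposed frame
over = np.full((4, 4), 220.0)     # over-exposed frame
fused = fuse_exposures([under, mid, over])
```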
As demand for 3DTV keeps increasing, the conversion of existing 2D videos to 3D has become a new area of research, in which depth map generation plays a key role. Two of the most important depth cues are scene geometry and motion vectors. This paper presents a depth map generation algorithm that combines both sources of information. Compared with previous work, our method improves vanishing point detection, motion vector estimation, and depth map generation.
We propose an effective method, the flexible log-polar image (FLPI), to represent quantum images sampled in a log-polar coordinate system. Each pixel is represented by three qubit sequences, and the whole image is stored in a normalized quantum superposition state. If needed, a flexible qubit sequence can be added to represent multiple images. Through elementary operations, both arbitrary rotation transformations and similarity evaluation can be realized. We also design an image registration algorithm to recognize the angular difference between two images when one is a rotation of the other. It is shown that the proposed algorithm achieves a conspicuous improvement in performance.
Saliency is an important feature of human visual attention: salient regions of an image immediately attract our attention, so attention to salient regions is an important attribute when measuring image quality. A novel image quality metric is proposed in this paper, in which salient regions are extracted and FSIM (Feature SIMilarity) is evaluated within these regions for image quality assessment. Experimental results on a set of intuitive examples with different distortion types demonstrate that the improved FSIM achieves better performance than the original form.
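The pooling change can be sketched as follows: a per-pixel similarity map (such as the one FSIM produces) is averaged with saliency weights rather than uniformly, so degradations in attended regions dominate the score:

```python
import numpy as np

def saliency_pooled_score(quality_map, saliency):
    # Weighted pooling: normalise the saliency map and use it to average
    # the per-pixel quality/similarity values.
    w = saliency / saliency.sum()
    return float((w * quality_map).sum())

q = np.ones((4, 4))
q[0, 0] = 0.0                 # a distortion in one pixel...
s = np.ones((4, 4))
s[0, 0] = 10.0                # ...that happens to be highly salient
score = saliency_pooled_score(q, s)
```

A salient distortion pulls the pooled score well below the uniform mean, matching the perceptual intent.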
After analyzing the weaknesses of traditional video codecs for real-time compression of high-speed camera images, this paper designs a video codec based on online compressed sensing (CS). The encoder applies pseudo-random down-sampling of the two-dimensional fast Fourier transform (2D FFT) to each video frame, while the decoder combines approximate message passing (AMP) with the three-dimensional dual-tree wavelet transform (3D DTWT) for offline reconstruction. Experimental results show that this method achieves a high signal-to-noise ratio while simplifying the encoding process.
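The encoder side can be sketched as below; the seeded mask makes the pseudo-random down-sampling reproducible at the decoder, which in the paper runs AMP with a 3D DTWT sparsity model offline (replaced here by trivial zero-filling for illustration):

```python
import numpy as np

def cs_encode(frame, keep, seed=0):
    # 2-D FFT of the frame, then keep a pseudo-random fraction `keep`
    # of the coefficients; the mask is reproducible from the seed.
    rng = np.random.default_rng(seed)
    mask = rng.random(frame.shape) < keep
    return np.fft.fft2(frame) * mask, mask

def cs_decode_zero_fill(measurements):
    # Naive stand-in decoder: inverse FFT of the zero-filled measurements.
    return np.real(np.fft.ifft2(measurements))

frame = np.random.default_rng(3).random((16, 16))
meas, mask = cs_encode(frame, keep=1.0)      # keep everything: lossless
rec = cs_decode_zero_fill(meas)
```

The coding step is just an FFT and a mask, which is why the per-frame cost stays low enough for high-speed capture; all the heavy lifting moves to the offline decoder.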
The spatial position of the convergence point of a dual-view stereo camera is a key parameter, yet simple and effective methods for locating it are currently lacking. We present two such methods. The first locates the convergence point by observing the disparity between the corresponding points of the principal points in the left and right images; the second computes the relative extrinsic parameters between the right and left cameras. Experimental results show that the first method is convenient for stereo cameras built from adjustable left and right cameras, while the second is convenient for stereo cameras built from fixed left and right cameras. Both methods are viable for convergence point positioning.
Color is often used to simplify object extraction and identification in color-based machine vision systems. However, the image colors produced by a color vision system depend strongly on the lighting geometry, the illumination color, and the spectral response of the digital camera: a small variation in illumination, or a change of camera, can dramatically alter image color. In this paper, color correction is performed for our color vision measurement system. The mapping coefficient matrix is obtained by polynomial regression under artificial D65 illumination and LED array illumination. Correction accuracies are compared between two commonly used device-independent color spaces, sRGB and CIE L*a*b*; sRGB is recommended for its higher accuracy and simpler algorithm. The corrected images illustrate the usefulness of our method for color correction.
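The polynomial-regression mapping reduces, in its first-order form, to a linear least-squares fit; higher-order terms (r·g, r², ...) are appended to the design matrix in the same way. A sketch with synthetic colors:

```python
import numpy as np

def fit_color_correction(measured, reference):
    # Solve reference ≈ [measured, 1] @ M in the least-squares sense.
    X = np.hstack([measured, np.ones((len(measured), 1))])
    M, *_ = np.linalg.lstsq(X, reference, rcond=None)
    return M

def apply_correction(rgb, M):
    X = np.hstack([rgb, np.ones((len(rgb), 1))])
    return X @ M

rng = np.random.default_rng(4)
measured = rng.random((10, 3))                  # camera RGB of 10 patches
A, b = rng.random((3, 3)), rng.random(3)
reference = measured @ A + b                    # "true" colors (synthetic)
M = fit_color_correction(measured, reference)
corrected = apply_correction(measured, M)
```

In practice `reference` comes from a color chart measured under the target illuminant, and the fit quality is evaluated in the chosen device-independent space.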
While rapid developments in optics and storage enable us to easily acquire large amounts of microscopy images, it remains a great challenge for biological researchers to analyze these huge data sets and draw meaningful, statistically sound conclusions. In this paper, we consider the single-particle point spread function fitting problem and implement a fast algorithm on a heterogeneous CPU+GPU architecture. Our approach is tested on a real-world dataset and runs about 23-40x faster than the traditional fitting algorithm.
Terrestrial laser scanning creates a point cloud composed of thousands or millions of 3D points; through preprocessing, TIN generation, and texture mapping, a 3D model of a real object is obtained. When the object is too large, it is separated into parts. This paper focuses on the uneven gray levels at the intersection of two adjacent textures. A new algorithm is presented: per-pixel linear interpolation along a loop-line buffer. The experimental data derive from a point cloud of the stone lion in front of the west gate of Henan Polytechnic University. The modeling flow has three steps: the large object is separated into two parts, each part is modeled, and the whole 3D model of the stone lion is then composed from the two part models. When the two part models are combined, an obvious fissure line appears in the overlapping section of their adjacent textures. Some researchers reduce the brightness of all pixels in the two adjacent textures, but such algorithms are not always effective and the fissure line may remain. The algorithm in this paper corrects the uneven gray levels of the adjacent textures: the fissure line in the overlapping textures is eliminated, and the gray transition across the overlap becomes smoother.
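The per-pixel linear interpolation across the overlap strip can be sketched as follows: the blend weight ramps from pure left texture to pure right, removing any brightness step at the join (the loop-line buffer geometry is simplified to a rectangular strip here):

```python
import numpy as np

def blend_overlap(left, right):
    # Linearly interpolate two registered texture strips column by
    # column: weight 1 at the left edge, 0 at the right edge.
    w = np.linspace(1.0, 0.0, left.shape[1])[None, :]
    return w * left + (1.0 - w) * right

left = np.full((4, 6), 100.0)     # darker texture in the overlap
right = np.full((4, 6), 160.0)    # brighter texture in the overlap
strip = blend_overlap(left, right)
```

Because the weights vary smoothly, the gray level transitions monotonically from one texture's brightness to the other's instead of jumping at a fissure line.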
The dynamic range of a scene is generally limited by the capture sensors and display devices, so the scene is shown with a low dynamic range, making it difficult to display details in both dark and bright areas simultaneously. This paper adopts flexible thresholds combined with a luminance map to improve the quality of images captured under non-ideal lighting, using only simple computation. The implementation effectively adjusts image contrast for both lowlight and highlight details while avoiding the common quality losses of halo artifacts, desaturation, and a grayish appearance.
The great advantage of the Microsoft Kinect is that it makes depth acquisition real-time and inexpensive. However, depth maps obtained directly from the Kinect contain missing regions and holes caused by optical factors, and these noisy depth maps affect many complex computer vision tasks. To improve depth map quality, this paper presents an efficient inpainting strategy based on watershed segmentation and region merging of the corresponding color images. The primitive regions produced by the watershed transform are merged into larger regions according to color similarity and the edges between regions. Finally, a mean filter over adjacent pixels fills in the missing depth values, and a deblocking filter smooths the depth maps.
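The final filling step can be sketched as an iterated neighbor-mean fill (the watershed/region-merging guidance is omitted; 0 is assumed to mark invalid depth, as in raw Kinect maps):

```python
import numpy as np

def fill_holes_mean(depth, hole=0.0):
    # Repeatedly replace hole pixels by the mean of their valid
    # 8-neighbours until no fillable holes remain.
    d = depth.astype(float)
    while True:
        ys, xs = np.nonzero(d == hole)
        if len(ys) == 0:
            break
        filled_any = False
        for y, x in zip(ys, xs):
            win = d[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            valid = win[win != hole]
            if valid.size:
                d[y, x] = valid.mean()
                filled_any = True
        if not filled_any:
            break
    return d

depth = np.full((5, 5), 10.0)
depth[2, 2] = 0.0                 # a hole caused by optical factors
filled = fill_holes_mean(depth)
```

Restricting the averaging to pixels inside the same merged color region, as the paper does, prevents depth bleeding across object boundaries.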
With the spread of computers and the internet, users continually upload text, images, photographs, and other multimedia to network or cloud storage spaces, so searching images economically has become a significant issue. This paper focuses on 3D images and proposes an efficient 3D search method. The analyzed objects come from the three-dimensional trademark gallery of the Intellectual Property Office of the Ministry of Economic Affairs, R.O.C., where each three-dimensional trademark is expressed by a set of 2D images. The method uses Harris corner detection combined with the CPDH (contour points distribution histogram) to extract the shape feature, and a color histogram to refine the color feature; the two features are then used together to retrieve similar 3D images. Experiments verify that the proposed method is effective.
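The color feature and its comparison can be sketched with a normalized per-channel histogram and histogram intersection (the bin count is an illustrative choice; the Harris + CPDH shape feature is omitted):

```python
import numpy as np

def color_hist(img, bins=8):
    # Concatenated, normalised per-channel histograms of an 8-bit image.
    h = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    h = np.concatenate(h).astype(float)
    return h / h.sum()

def hist_similarity(h1, h2):
    # Histogram intersection: 1.0 for identical colour distributions.
    return float(np.minimum(h1, h2).sum())

img1 = np.full((8, 8, 3), 10, dtype=np.uint8)
img2 = np.full((8, 8, 3), 200, dtype=np.uint8)
same = hist_similarity(color_hist(img1), color_hist(img1))
diff = hist_similarity(color_hist(img1), color_hist(img2))
```

Ranking candidate trademarks by a combination of shape-feature distance and this color similarity yields the retrieval list.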
The pulse description word (PDW) encoder of a digital channelized reconnaissance receiver is investigated, and its encoding flow is introduced. Methods for measuring the pulse width of wideband frequency-modulated signals crossing channels and the direction of arrival (DOA) are discussed in detail. The encoder has been successfully applied to a reconnaissance receiver in an electronic support measures (ESM) system.
The WiFi network is one of the most rapidly developing wireless communication networks; it makes wireless office and wireless life possible and greatly expands the application forms and scope of the internet. At the same time, WiFi security has received wide attention and is a key factor in WiFi network development. This paper gives a systematic introduction to WiFi networks and their security problems, and reviews and compares WiFi security technologies. To address these problems, the paper presents a new WiFi security model and key exchange algorithm. Experiments testing the performance of the model show that the new security model can withstand external network attacks and ensure stable, safe operation of the WiFi network.
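The abstract does not specify the key exchange algorithm; as an illustration of the class of protocol such security models build on, here is a textbook Diffie-Hellman exchange with toy parameters (real deployments use large, authenticated parameters):

```python
def dh_shared_key(p, g, a_secret, b_secret):
    # Each side publishes g^secret mod p; both then derive the same
    # shared key from the other side's public value.
    A = pow(g, a_secret, p)
    B = pow(g, b_secret, p)
    return pow(B, a_secret, p), pow(A, b_secret, p)

k_alice, k_bob = dh_shared_key(p=23, g=5, a_secret=6, b_secret=15)
```

Both sides compute the same key without ever transmitting it, which is the property any WiFi key exchange scheme must provide.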
Reversible watermarking algorithms allow extraction of the hidden information to ensure authenticity and restore the original medium. Prediction-error expansion (PEE) methods give a high payload with little or no visible distortion; in these methods the performance of the algorithm relies heavily on the predictor's response. In this study, each image pixel is predicted from the 8 pixels within its 3x3 neighborhood. Image pixels are divided into two sets, and the information bits in each set are embedded using PEE and histogram shifting (HS). Better prediction in conjunction with HS increases the imperceptibility of the watermarked image. Experimental results indicate the superior performance of the proposed scheme in comparison with recent methods published in the literature.
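The core PEE step for a single pixel is exactly reversible, as the sketch below shows (overflow/underflow handling and the histogram-shifting location map are omitted):

```python
def pee_embed(pixel, prediction, bit):
    # Double the prediction error e = pixel - prediction and append the
    # payload bit in its least significant bit.
    e = pixel - prediction
    return prediction + 2 * e + bit

def pee_extract(marked, prediction):
    # Recover the bit (LSB of the expanded error) and the original pixel.
    e2 = marked - prediction
    return e2 & 1, prediction + (e2 >> 1)

round_trips = [pee_extract(pee_embed(p, q, b), q) == (b, p)
               for p, q, b in [(100, 103, 1), (100, 103, 0), (50, 48, 1)]]
```

A better predictor keeps the errors small, so the doubled errors distort the pixel less; that is why predictor quality dominates the imperceptibility of PEE schemes.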
Through an in-depth study of existing mobile e-commerce and the WAP protocols, this paper presents a security solution for e-commerce systems based on WPKI and describes its implementation process and specific implementation details. The solution uniformly distributes the keys used by the various participating entities to fully ensure the confidentiality, authentication, fairness and integrity of mobile e-commerce payments, and therefore has practical value for improving the security of e-commerce systems.
Existing privacy-preserving publishing models, whether static or dynamic, cannot meet the requirements of periodical publishing of medical information. This paper presents a (k,l)-anonymity model that keeps individual associations, along with a principle based on ε-invariance groups for subsequent periodical publishing; the PKIA and PSIGI algorithms are then designed for them. The proposed methods preserve more individual associations while protecting privacy, and achieve better publishing quality. Experiments confirm the theoretical results and the practicability of the approach.
At present, the XML language is regarded as a standard for data exchange thanks to attributes such as platform independence, self-description, easy extension, and the separation of content from form. Therefore, in the course of studying the sharing of heterogeneous data in coal enterprises, we design a heterogeneous-data query system based on XML that provides a unified data-integration platform and fast, high-quality sharing of information resources, solving the problem of information islands in coal enterprises.
Aiming at the rigorous identity authentication required of both parties in network transactions, and based on an analysis of Hyper Elliptic Curves (HEC) and of ElGamal signatures built on the discrete logarithm problem, the two approaches were combined to design a digital signature system based on HEC-ElGamal. It was applied to identity authentication to provide bipolar signature verification, and its security performance was analyzed. Because the system inherits the security of both underlying schemes, it reaches a high security index and provides a guarantee for integrity checking and identity authentication.
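For reference, the discrete-logarithm half of such a scheme is the textbook ElGamal signature, sketched below over a small prime field (the hyperelliptic-curve component of the paper's system is not reproduced; parameters are toy-sized assumptions for illustration):

```python
import secrets
from math import gcd

# Textbook ElGamal signatures over a prime field. Sign: r = g^k mod p,
# s = (h - x*r) * k^{-1} mod (p-1). Verify: g^h == y^r * r^s mod p.
P = 467          # small prime, illustration only
G = 2

def keygen():
    x = secrets.randbelow(P - 3) + 2      # private key x
    return x, pow(G, x, P)                # (private x, public y = g^x)

def sign(h, x):
    while True:
        k = secrets.randbelow(P - 3) + 2  # per-signature nonce
        if gcd(k, P - 1) == 1:            # k must be invertible mod p-1
            break
    r = pow(G, k, P)
    s = (h - x * r) * pow(k, -1, P - 1) % (P - 1)
    return r, s

def verify(h, r, s, y):
    return pow(G, h, P) == pow(y, r, P) * pow(r, s, P) % P

x, y = keygen()
r, s = sign(123, x)
assert verify(123, r, s, y)               # genuine signature accepts
assert not verify(124, r, s, y)           # altered message rejects
```

The verification identity g^h = g^(xr+ks) = y^r · r^s is what the "bipolar" check in the abstract ultimately rests on; a curve-based variant replaces the modular exponentiations with group operations in the Jacobian of the curve.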
Evaluations of both academic face recognition algorithms and commercial systems have shown that recognition performance degrades significantly under variations of illumination. Previous methods for illumination-robust face recognition usually involve computationally expensive 3D model transformations or optimization-based reconstruction using multiple gallery face images, making them infeasible in practical large-scale face identification applications. In this paper, we propose an alternative face identification framework in which one image per person is used for enrollment, as is common practice in real-life applications. Probe images captured under different illumination conditions are synthesized to imitate the illumination condition of the enrolled gallery face image. We assume Lambertian reflectance of human faces and use harmonic representations of lighting. We demonstrate satisfactory performance on the Yale B database, both visually and quantitatively. The proposed method is of very low complexity when linear facial features are used, and is therefore scalable to large-scale applications.
Based on fractal theory, an image quality assessment method is designed and implemented around the consistency between the image fractal dimension and the surface fractal dimension. Building on the classic SSIM algorithm and an analysis of the factors affecting image quality, a fractal quality factor is constructed for evaluating blurred images. Experiments show that the method correlates well with subjective evaluation and provides a sound objective assessment of blurred image quality.
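The fractal dimension underlying such a quality factor is commonly estimated by box counting; the sketch below shows that estimator on a binary image (the paper's exact quality factor and its combination with SSIM are not specified, so this is only the measurement step):

```python
from math import log

# Box-counting fractal dimension estimate: count occupied boxes N(s) at
# several box sizes s, then fit the slope of log N(s) vs log(1/s).
def box_count(img, size):
    n, rows, cols = 0, len(img), len(img[0])
    for r in range(0, rows, size):
        for c in range(0, cols, size):
            if any(img[i][j]                      # box contains a set pixel
                   for i in range(r, min(r + size, rows))
                   for j in range(c, min(c + size, cols))):
                n += 1
    return n

def fractal_dimension(img, sizes=(1, 2, 4)):
    xs = [log(1 / s) for s in sizes]
    ys = [log(box_count(img, s)) for s in sizes]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))     # least-squares slope

# A filled 2-D region should score dimension 2.
square = [[1] * 8 for _ in range(8)]
print(round(fractal_dimension(square), 2))        # → 2.0
```

Blur smooths fine structure, which lowers estimates like this one relative to a sharp reference; that deviation is the kind of signal a fractal quality factor exploits.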
We present an improved approach to a previous work on 3D hand tracking that also uses the Microsoft Kinect sensor. The previous implementation tracks the position, orientation, and full articulation of the hand from marker-less visual observations provided by Kinect. Posed as an optimization problem, the objective of hand tracking is to minimize the difference between a hand-gesture depth image obtained from Kinect and a hypothesized 3D hand model. The previous method relied heavily on the best current-frame result, skin detection data, and depth data, often resulting in a lost-track state with unrecoverable error, especially when the hand moved faster than the per-frame processing speed. To recover from the lost-track state, we use the skeleton joint data from Kinect to determine hand position, instead of relying on skin data. This joint data is also used to limit the search range of our Particle Swarm Optimization (PSO), allowing a more efficient search. Consequently, the fewer generations required to obtain a result enable higher frame-rate processing. The computationally intensive step of matching the observed hand depth with the hypothesized hand pose is accelerated using a GPGPU. The proposed method also improves reliability by adding a recovery mechanism for quick hand movements, eliminating the need for manual hand-position initialization by the user. Our method does not depend on skin color detection and therefore avoids errors from incorrect or extraneous skin detection; a user need not hide arm skin by wearing long-sleeved clothing, for example.
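A minimal PSO with a clamped search range — the mechanism the paper bounds using Kinect joint data — can be sketched as follows. The 2-D sphere function here is a stand-in assumption for the actual depth-difference objective, which requires rendering the hand model:

```python
import random

# Bare-bones particle swarm optimization with positions clamped to
# [lo, hi], mimicking a search range derived from skeleton joint data.
def pso(f, lo, hi, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    rnd = random.Random(0)                        # fixed seed: repeatable demo
    pos = [[rnd.uniform(lo, hi) for _ in range(2)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # per-particle best
    gbest = min(pbest, key=f)[:]                  # swarm best
    for _ in range(iters):
        for i, p in enumerate(pos):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rnd.random() * (pbest[i][d] - p[d])
                             + c2 * rnd.random() * (gbest[d] - p[d]))
                p[d] = min(max(p[d] + vel[i][d], lo), hi)   # clamp to range
            if f(p) < f(pbest[i]):
                pbest[i] = p[:]
                if f(p) < f(gbest):
                    gbest = p[:]
    return gbest

best = pso(lambda p: p[0] ** 2 + p[1] ** 2, -5.0, 5.0)
```

Tightening `lo`/`hi` around a good prior, as the paper does with joint positions, is what lets the swarm converge in fewer generations.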
In our daily lives we often assess our surroundings to classify the situations we encounter. We do so based on the observations we make of our surroundings and information we obtain from other sources, using our knowledge and abilities. While this process is natural to us, if we want to give a similar task to a computer system then we have to take various steps in order to enable our computers to partially emulate the human capacity for observation, learning and making final decisions based on knowledge. As information complexity increases, there is an increasing demand for systems which can recognize and classify the objects presented to them.
Recently there has been growing interest in applying computer image analysis in various research areas. One such application is food quality assessment, which aims to replace traditional instrumental methods. A computer vision system was developed to assess carrot quality, based on a single variety. Characteristic qualities of the variety were chosen to describe a suitable root. In the course of the study, digital photographs of carrot roots were taken and used as input data for the assessment performed by a dedicated computer program created as part of the study.
Spatiogram features have been widely used in computer vision. In this paper, in order to improve the performance of image retrieval based on spatiogram features, we propose a new method to measure spatiogram similarity in the framework of an extended Gaussian Lie group model. In our method, the spatiogram features are extracted in the HSV space. The similarity between images described by spatiogram features depends on the distances between Gaussian probability density functions, which can be calculated using Lie group theory. Within the framework of the extended Gaussian matrix Lie group, the contributions of the covariance matrix and the mean vector are adjusted automatically, which ensures that neither is ignored when calculating image similarity during retrieval. We test our algorithm on the WANG Image Database. Experiments show that the proposed method performs better than the method based on the traditional Gaussian Lie group.
Object classification is an important step for high-level vision processing tasks such as security management and abnormal-event analysis. In this paper, we address these challenges in real-world unconstrained environments where the background is complex and dynamic. In the proposed algorithm, we extract features in a color space, and a technique is also developed to monitor abnormal water surfaces based on Mutation Particle Swarm Optimization (MPSO). MPSO is an important evolutionary algorithm that not only uses a mutation operator, originally designed for the Genetic Algorithm (GA), to update particles, but also applies a weighted update rule to produce the new swarm. Experimental results show that our algorithm works efficiently and robustly.
In the literature of neurophysiology and computer vision, global and local features have both been demonstrated to be complementary for robust face recognition and verification. In this paper, we propose an approach to face verification that fuses global and local discriminative features. Global features are extracted from whole face images by the Fourier transform, and local features are extracted from ten different component patches by a new image representation method named Histogram of Local Phase Quantization Ordinal Measures (HOLPQOM). Experimental results on the Labeled Faces in the Wild (LFW) benchmark show the robustness of the proposed local descriptor compared with other commonly used descriptors.
In order to improve the accuracy of global motion estimation (GME), a new GME method combined with motion segmentation is proposed in this paper. The proposed method removes motion vector (MV) outliers and performs an initial motion segmentation by analyzing the properties of the motion vectors. From the filtered MV field, the global motion parameters are estimated, and the difference frame is then generated by global motion compensation (GMC). Based on the movement difference between the background and foreground regions, and movement consistency within the same region, the absolute sum of the difference frame in every block is calculated, adaptively generating the threshold value for detecting motion regions. MVs in the motion regions are rejected as outliers for GME, and iterative computations between GME and motion segmentation are performed successively. Experimental results demonstrate that the proposed approach can effectively extract motion regions, thus enhancing the accuracy of GME.
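The outlier-rejection step can be illustrated with a pure-translation motion model — a simplifying assumption; the paper's parametric model and its iteration with segmentation are more elaborate. Vectors far from the median motion are treated as foreground and excluded before averaging:

```python
# Sketch of MV outlier filtering for global motion estimation under a
# translation-only model: median-based rejection, then a mean over inliers.
def estimate_global_motion(mvs, thresh=2.0):
    def median(vals):
        s = sorted(vals)
        return s[len(s) // 2]
    mx, my = median([v[0] for v in mvs]), median([v[1] for v in mvs])
    inliers = [v for v in mvs
               if abs(v[0] - mx) <= thresh and abs(v[1] - my) <= thresh]
    gx = sum(v[0] for v in inliers) / len(inliers)
    gy = sum(v[1] for v in inliers) / len(inliers)
    return (gx, gy), inliers

# Four camera-pan vectors plus one moving-object vector (the outlier).
mvs = [(1, 0), (1, 1), (2, 0), (1, 0), (15, -9)]
(gx, gy), inliers = estimate_global_motion(mvs)
# (15, -9) is rejected; the global motion is estimated from the rest.
```

In the paper this filtering alternates with segmentation: blocks flagged as moving regions feed back as new outliers for the next GME pass.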
Efficient query processing is a key problem for applications deployed on P2P networks. This paper analyses the shortcomings of several query algorithms and presents a new algorithm, DDI (distributed searching with double indices). It considers the popularity of documents and the link status of the network, calculates the availability of the nodes across the whole network, and determines the route of the query process. Simulation experiments compare query time, the number of requests, and the volume of update information.
With the rapid development of computer network technology, peer-to-peer (P2P) networking research has gradually matured, and P2P is widely used in different fields; some large P2P computing projects have entered the implementation stage. At present, many popular software systems such as Gnutella, Freenet, and Napster are built on P2P technology. How to achieve effective information query has become one of the key problems of P2P research.
Researchers have put forward many knowledge representation methods for specific problems in different domains. Each of these methods has its own characteristics and strong points, and each suits different kinds of questions. This paper studies a knowledge representation method based on extenics. Through an analysis of organizational knowledge systems, it puts forward a theory of knowledge sets and, using extenics theory, gives an extension representation and a correlation analysis of knowledge sets.
This paper presents an economical, extensible model and a new load balancing strategy based on network load capacity, so that the integrity of evidence in high-speed network forensics can be guaranteed. The algorithm estimates evidence-capture ability through a dynamic feedback and forecast mechanism. Taking one session as the distribution unit, it distributes network packets to the host with the maximal load capacity. The results show that the scalability of the system can cope with current heavy network flows.
This paper studies wetland vegetation dynamics in the Yellow River Delta using remote sensing and geographic information system technology. The results show that the land area of the regions enclosed by sea dykes has been basically stable since 1995, while that of the open regions showed a diminishing trend from 1984 to 2006. The area of saline vegetation increased from 1984 to 2006, with a change rate of 7.18. The analysis of the dynamic degree of the wetland vegetation landscape revealed that succession and conversion within the wetland vegetation landscapes was also one of the main processes. The conversion rate of farmland exceeded 12% in both the 1984-1995 and 1995-2006 periods, indicating that farmland reclamation activity in the coastal region has been on the rise.
Taking advantage of Internet of Things fusion technology, this paper constructed a combined hardware and software application. The hardware's main function was to collect the bus behavior data the system needed, including basic data on the driver and fare bag stored in passive RFID tags on the moving bus, and information on the running status of the bus at each stage in the parking area, perceived by various kinds of sensors. The information, processed by middleware, was sent to the data center. The program solves the problem of monitoring bus behavior in the parking area while achieving data sharing, thereby tackling the defects of traditional bus parking-area management systems: non-automated data collection, non-real-time data presentation and poor data sharing.
Compared with a single HLA federation framework, a hierarchical federation framework can improve the performance of a large-scale simulation system to a certain degree by distributing load over several RTIs. However, in a hierarchical federation framework the RTI is still the center of message exchange, and thus still the performance bottleneck of the federation; the data explosion in a large-scale HLA federation may overload the RTI, causing performance degradation or even fatal errors. To address this problem, this paper proposes a load balancing method for hierarchical federation simulation systems based on queuing theory, comprising three main modules: queue length prediction, a load-controlling policy, and a controller. The method improves the usage of federate node resources and the performance of the HLA simulation system by balancing load across the RTIG and the federates. Finally, experimental results demonstrate the efficiency of the method's control.
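As a hedged illustration of queue-length prediction driving a load-controlling policy, the sketch below models each node as an M/M/1 queue and routes new load to the node with the smallest predicted mean queue length; the node names and rates are invented, and the paper's actual predictor and controller are not reproduced:

```python
# M/M/1 mean queue length L = rho / (1 - rho), rho = lambda / mu.
def mm1_queue_length(arrival_rate, service_rate):
    rho = arrival_rate / service_rate      # utilization
    if rho >= 1:
        return float("inf")                # unstable queue: node overloaded
    return rho / (1 - rho)

def pick_node(nodes):
    # nodes: {name: (arrival_rate, service_rate)}; route to least backlog.
    return min(nodes, key=lambda n: mm1_queue_length(*nodes[n]))

nodes = {"rtig": (90, 100), "fed_a": (30, 100), "fed_b": (99, 100)}
print(pick_node(nodes))   # → fed_a (lowest predicted backlog)
```

The interesting property is the nonlinearity: a node at 99% utilization predicts a backlog two orders of magnitude larger than one at 30%, so a queue-length policy steers load away from near-saturated RTIs much earlier than a raw-utilization policy would.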
In this paper, we propose a novel method for face recognition under contiguous occlusion. The general idea is to eliminate the impact of occlusions on the linear regression-based classification (LRC) method. Inspired by level set methods, which provide smooth, closed contours as segmentation results that fit the assumption of spatial continuity of occlusions, we show how to use the spatial continuity of pixels to segment the occluded regions. By incorporating level-set-based image segmentation into LRC, the proposed approach can reliably determine the occluded regions and remove them from the LRC framework. Extensive experiments on publicly available databases (Extended Yale B and AR) show the efficacy of the proposed approach against different types of occlusion.
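The LRC backbone can be sketched in a deliberately reduced form: one gallery vector per class (LRC proper regresses onto a multi-image class subspace), with the probe classified by the smallest reconstruction residual. The occlusion-segmentation step of the paper would first mask out occluded pixels from both vectors; the data below is invented:

```python
from math import sqrt

# Minimal linear regression classification (LRC) with a single gallery
# vector per class: least-squares fit of the probe onto each class
# vector, then pick the class with the smallest residual norm.
def lrc_classify(gallery, probe):
    best, best_res = None, float("inf")
    for label, g in gallery.items():
        alpha = (sum(gi * pi for gi, pi in zip(g, probe))
                 / sum(gi * gi for gi in g))          # 1-D least squares
        res = sqrt(sum((pi - alpha * gi) ** 2
                       for gi, pi in zip(g, probe)))  # reconstruction error
        if res < best_res:
            best, best_res = label, res
    return best

gallery = {"alice": [10, 20, 30, 40], "bob": [40, 10, 40, 10]}
print(lrc_classify(gallery, [11, 19, 33, 41]))   # → alice
```

Removing occluded pixels before the fit matters because a contiguous occluder otherwise dominates the residual and can flip the decision.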
Mobile reading is the trend in today's publishing industry, and an intelligent recommendation system is useful and profitable for mobile reading platforms. Current intelligent recommendation systems mainly focus on news recommendation or product recommendation in e-commerce. In this paper, we designed and implemented an intelligent recommendation system based on the slope one algorithm. Results show that our algorithm helps users find books of interest and can thus greatly improve the income of a mobile reading platform.
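The weighted slope one predictor at the core of such a system fits in a few lines: for each item the target user has rated, compute the average rating deviation to the target item and weight it by how many users rated both. The ratings below are invented sample data:

```python
# Weighted Slope One rating prediction over {user: {book: rating}}.
def slope_one_predict(ratings, user, item):
    num = den = 0.0
    for other, r_other in ratings[user].items():
        # Deviations item-vs-other across users who rated both.
        diffs = [r[item] - r[other] for r in ratings.values()
                 if item in r and other in r]
        if diffs:
            dev = sum(diffs) / len(diffs)        # average deviation
            num += (dev + r_other) * len(diffs)  # weight by support
            den += len(diffs)
    return num / den

ratings = {
    "ann": {"book1": 5, "book2": 3},
    "bob": {"book1": 4, "book2": 2, "book3": 4},
    "cat": {"book2": 4},
}
print(round(slope_one_predict(ratings, "cat", "book3"), 2))   # → 6.0
```

(Here only bob rated both book2 and book3, with deviation +2, so cat's book2 rating of 4 predicts 6.0; a real system would clamp predictions to the rating scale.)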
Agricultural products typically exhibit high variance in quality characteristics. To assure customer satisfaction and control manufacturing productivity, quality classification is necessary to screen off defective items and to grade the products. This article presents an application of image processing techniques to squid grading and defect discrimination. A preliminary study indicated that surface color is an efficient determinant of the quality of splendid squids. In this study, a computer vision system (CVS) was developed to examine the characteristics of splendid squids. Using image processing techniques, squids could be classified into three quality grades in accordance with an industry standard. The developed system first sifted through squid images to reject those with black marks. Qualified squids were then graded, using fuzzy logic, on the proportions of white, pink, and red regions appearing on their bodies. The system was evaluated on 100 images of squids at different quality levels; the accuracy obtained by the proposed technique was 95% compared with the sensory evaluation of an expert.
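A hedged sketch of the fuzzy grading step: triangular memberships over the white/pink/red pixel fractions, with the winning grade taken by maximum membership. The membership breakpoints and grade labels below are invented for illustration; the paper's actual rule base is not given:

```python
# Triangular fuzzy membership: 0 outside [a, c], peaking at b.
def tri(x, a, b, c):
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def grade_squid(white, pink, red):
    # One illustrative rule per grade; inputs are color-area fractions.
    scores = {
        "grade A": tri(white, 0.5, 0.8, 1.01),   # mostly white body
        "grade B": tri(pink, 0.2, 0.5, 0.8),
        "grade C": tri(red, 0.2, 0.6, 1.01),
    }
    return max(scores, key=scores.get)           # defuzzify by max membership

print(grade_squid(white=0.85, pink=0.10, red=0.05))   # → grade A
```

The fuzzy formulation buys graceful handling of borderline colorations, where hard thresholds on the fractions would flip grades abruptly.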
To improve the operation level of copper converters, approaches to optimal decision-making modeling for the copper-matte converting process based on data mining are studied. In view of the characteristics of the process data, such as noise and small sample size, a new robust improved ANN (artificial neural network) modeling method is proposed. Taking into account the application purpose of the decision-making model, three new evaluation indexes named support, confidence and relative confidence are proposed. Using real production data and the methods above, an optimal decision-making model for the blowing time of the S1 period (the first slag-producing period) is developed. Simulation results show that this model can significantly improve the converting quality of the S1 period, increasing the optimal probability from about 70% to about 85%.
Association rule mining is an essential knowledge discovery method that finds associations in a database. Previous studies on association rule mining focus either on finding quantitative association rules from certain data or on finding Boolean association rules from uncertain data. Unfortunately, due to instrument errors, the imprecision of sensor monitoring systems and so on, real-world data tend to be quantitative data with inherent uncertainty. In this paper, we study the discovery of association rules from probabilistic databases with quantitative attributes. Once the quantitative attributes are converted into fuzzy sets, we obtain a probabilistic database containing fuzzy sets. This is theoretically challenging, since we need appropriate interest measures to define the support and confidence degrees of fuzzy events with probability. We propose a Shannon-like entropy to measure the information of such events. An algorithm is then proposed to find fuzzy association rules from a probabilistic database. Finally, an illustrative example demonstrates the procedure of the algorithm.
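A baseline for the interest measures discussed above can be sketched as expected fuzzy support: each record carries an existence probability and per-attribute fuzzy membership degrees, and support is the probability-weighted average of the minimum membership. This is a common starting point, not the paper's entropy-based measure; the attribute names and numbers are invented:

```python
# Expected fuzzy support/confidence over a probabilistic table.
# records: list of (existence probability, {attribute: membership degree}).
def fuzzy_support(records, attrs):
    return (sum(p * min(m[a] for a in attrs) for p, m in records)
            / sum(p for p, _ in records))

def fuzzy_confidence(records, antecedent, consequent):
    return (fuzzy_support(records, antecedent + consequent)
            / fuzzy_support(records, antecedent))

db = [
    (0.9, {"temp_high": 0.8, "load_high": 0.7}),   # likely record
    (0.6, {"temp_high": 0.3, "load_high": 0.9}),   # uncertain record
]
conf = fuzzy_confidence(db, ["temp_high"], ["load_high"])
# conf evaluates the rule "temp_high => load_high" over the fuzzy,
# probabilistic records.
```

Min-combination of memberships plays the role of itemset co-occurrence, and the existence probabilities discount records that may not be real.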
A continuous-time model is proposed in this paper for the decentralized decision-making of multi-product promotion under moral hazard. The main result is that when the revenue function and the law of motion satisfy certain conditions, the optimal effort level of the division in continuous time is the same as in the static case, even though both are distorted compared with the optimal effort level under complete information.
With the development of remote sensing technology, ever more image data are acquired, and how to manage and use massive image data safely and efficiently has become an urgent problem. Based on the methods and characteristics of managing and applying massive remote sensing image data, this paper puts forward a new method that uses the Oracle Call Interface and Oracle InterMedia to store image data, and then uses these components to realize the system's function modules. Finally, image data storage and management are successfully realized with VC and the Oracle InterMedia component.
This paper studies the registration of medical images. The wavelet transform is adopted to decompose the images, because medical images have high resolution and registration is computationally expensive. First, the low-frequency sub-images are matched; then the source images are matched. The registration is performed by an ant colony optimization algorithm that searches for the extremum of the mutual information. Experimental results demonstrate that the proposed approach not only reduces the amount of calculation, but also escapes local extrema during the optimization process and finds the optimal value.
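The mutual information criterion the ant colony search maximizes can be computed from the joint histogram of the two images; the sketch below shows that measure on toy grey-level sequences (the ACO search itself and the wavelet decomposition are not reproduced):

```python
from collections import Counter
from math import log

# Mutual information between two equal-length grey-level sequences,
# estimated from marginal and joint histograms: MI peaks when the
# images are well aligned.
def mutual_information(a, b):
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum((c / n) * log((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in pab.items())

identical = [0, 0, 1, 1, 2, 2]
scrambled = [2, 1, 0, 2, 1, 0]
# Perfect alignment yields higher MI than a scrambled pairing.
mi_aligned = mutual_information(identical, identical)
mi_misaligned = mutual_information(identical, scrambled)
print(mi_aligned > mi_misaligned)   # → True
```

A registration loop slides one image over the other (here, via the ACO-proposed transforms) and keeps the transform with the highest MI.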
Remote sensing images are widely used in AutoCAD, but AutoCAD lacks remote sensing image processing functions. In this paper, ObjectARX was used as the secondary development tool, combined with the Image Engine SDK, to realize the acquisition of remote sensing image pixel attribute data within AutoCAD, which provides critical technical support for remote sensing image processing algorithms in the AutoCAD environment.
ASP.NET technology was used to construct a B/S-mode image query system. The theory and technology of database design, color feature extraction from images, and indexing and retrieval in the construction of the image repository were researched. The system was tested in campus LAN and WAN environments. The test results show that the system architecture design meets users' needs for querying the related resources.
This paper presents four methods for selective video encryption based on MPEG-2 video compression, operating respectively on the slices, the I-frames, the motion vectors, and the DCT coefficients. We use AES encryption in simulation experiments for the four methods on the VS2010 platform, and compare the visual effect and the per-frame processing speed after encryption. The encryption depth can be selected arbitrarily; we design it using a double-limit counting method, so the accuracy can be increased.
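Selective encryption of only certain frame types can be sketched as follows. Since AES is not in the Python standard library, a SHA-256 counter keystream stands in for the AES stream here, and the (type, payload) frame representation is a simplification of real MPEG-2 structure:

```python
import hashlib

def keystream(key, nonce, n):
    """SHA-256 counter-mode keystream -- a stand-in for the AES stream
    the paper uses, not a vetted cipher."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

def encrypt_selected(frames, key, selected={"I"}):
    """Encrypt only frames whose type is in `selected` (e.g. I-frames),
    leaving P/B frames in the clear. XOR makes the operation its own inverse."""
    out = []
    for i, (ftype, payload) in enumerate(frames):
        if ftype in selected:
            ks = keystream(key, i.to_bytes(8, "big"), len(payload))
            payload = bytes(a ^ b for a, b in zip(payload, ks))
        out.append((ftype, payload))
    return out
```

Restricting encryption to I-frames already destroys the reference pictures that P- and B-frames predict from, which is what makes selective encryption cheap yet effective.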
Multi-modality image registration plays an important role in medical image processing. The diffeomorphic demons method has been proven to be a robust and efficient way to register single-modality images; however, it cannot deal with multi-modality images. In this paper we introduce mutual information into the diffeomorphic demons method. In addition to the original force driving the image deformation, the proposed method adds the gradient of the mutual information with respect to the current transformation and incorporates mutual information into the energy function. We compare the registration results of the proposed method, the diffeomorphic demons method, and B-spline-based free-form deformation combined with mutual information. Experiments show that the proposed method gives better results, with the smallest registration errors in the case of local distortions. In conclusion, the proposed method performs well on multi-modality image registration with local deformations.
High-intensity focused ultrasound treatment combined with magnetic resonance technology (MRI-guided HIFU, MRgHIFU) uses MRI for target positioning so that thermal ablation can proceed without harming the surrounding tissue; image registration plays an important role in implementing such precise treatment. In this paper, we apply three-dimensional free-form deformation non-rigid registration to treatment plan amendment and tracking of breast cancer. Free-form deformation based and demons based non-rigid registration are applied, for comparison, to breast cancer MR images acquired at different times. The results of the experiments show that registration of breast tumor image data with both slight and larger deformations is effective, and the mutual information of the ROI increased from 1.49 before registration to 1.53.
With limited dynamic range, images acquired by ordinary image sensors cannot capture all the information of a given scene. To acquire a high dynamic range image that retains both the bright and the dark parts, this article presents a method that improves the dynamic range by using double exposure: the same sensor captures the scene with two different exposures, and the resulting image data are fused to enlarge the dynamic range. For over-exposed and under-exposed regions, the algorithm enhances the contrast between them and displays both. The algorithm is fast, fusing two 512x512 images within 23 ms, and works over a high dynamic range, adjusting its essential parameters according to the scene to achieve a better fusion.
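The abstract does not state its fusion rule; one minimal sketch is a per-pixel weighting that favors whichever exposure is better exposed at that pixel (the triangular mid-gray weight is an assumption, not the paper's formula):

```python
def fuse_exposures(short_exp, long_exp):
    """Per-pixel weighted fusion of a short and a long exposure of the
    same scene, given as flat lists of 8-bit values. Each pixel is drawn
    mostly from the exposure in which it is closer to mid-gray."""
    def weight(v):
        # 1.0 at mid-gray (well exposed), 0.0 at pure black/white (clipped)
        return 1.0 - abs(v - 128) / 128.0

    fused = []
    for s, l in zip(short_exp, long_exp):
        ws = weight(s) + 1e-6  # epsilon avoids 0/0 when both are clipped
        wl = weight(l) + 1e-6
        fused.append(round((ws * s + wl * l) / (ws + wl)))
    return fused
```

A pixel that is blown out (0 or 255) in one exposure gets almost zero weight there, so the detail from the other exposure dominates.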
The block adaptive quantization (BAQ) algorithm is a comparatively mature approach to SAR raw data compression. It rests on the premise that SAR raw data follow a Gaussian distribution; however, when the imaged region is quite rugged, some data blocks do not. Therefore, a block adaptive scalar-vector quantization (BASVQ) algorithm is put forward in this paper: scalar quantization is applied to data blocks that satisfy the Gaussian assumption, and vector quantization to those that do not. The experiments demonstrate that BASVQ outperforms BAQ, and the algorithm has practical value.
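A sketch of the block-adaptive idea: classify each block by a crude normality test (the paper does not give its test; excess kurtosis is used here as a stand-in), then scalar-quantize Gaussian-looking blocks relative to their own standard deviation:

```python
import statistics

def looks_gaussian(block, kurt_tol=1.0):
    """Crude normality check via excess kurtosis, which is 0 for a true
    Gaussian. The tolerance and the test itself are illustrative."""
    m = statistics.fmean(block)
    s = statistics.pstdev(block)
    if s == 0:
        return False
    kurt = statistics.fmean(((x - m) / s) ** 4 for x in block) - 3.0
    return abs(kurt) <= kurt_tol

def baq_scalar_quantize(block, bits=3):
    """Block-adaptive scalar quantization: normalize each sample by the
    block's own standard deviation, then clip to 2**bits signed levels."""
    s = statistics.pstdev(block) or 1.0
    half = 2 ** bits // 2
    return [max(-half, min(half - 1, round(x / s))) for x in block]
```

Only the per-block standard deviation needs to be transmitted alongside the quantized samples, which is what keeps BAQ-style schemes cheap.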
This paper discusses a data hiding application using steganography. We introduce a two-phase steganography application that allows the user first to compress the information into ASCII form using our novel compression technique (a compression technique utilizing reference-point coding) and then to embed the information into the carrier using least significant bit (LSB) algorithms. The compression method additionally allows the user to choose lossy or lossless compression before embedding the information into the carrier. This flexibility lets the user hide information with a customizable compression phase. Reducing the size of the hidden message lessens the number of bits required to embed it, and thus reduces the possibility of detection by the human visual system.
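The embedding phase described above is the standard LSB technique; a minimal sketch over a flat list of 8-bit pixel values:

```python
def embed_lsb(pixels, message):
    """Hide message bytes in the least significant bits of the carrier
    pixels, most significant bit of each byte first."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("carrier too small for message")
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b  # overwrite only the lowest bit
    return out

def extract_lsb(pixels, n_bytes):
    """Recover n_bytes hidden by embed_lsb."""
    out = bytearray()
    for k in range(n_bytes):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[k * 8 + i] & 1)
        out.append(byte)
    return bytes(out)
```

Each carrier pixel changes by at most 1 gray level, which is why a shorter (compressed) message is harder to detect visually.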
This paper presents an efficient image encryption scheme for color images based on quantum chaotic systems. In this scheme, a new substitution/confusion step is achieved via a toral automorphism in the integer wavelet transform domain, scrambling only the Y (luminance) component of the low-frequency subband. A chaotic stream encryption step is then accomplished by generating an intermediate chaotic key-stream image with the help of the quantum chaotic system. Simulation results confirm the feasibility of the proposed scheme for color image encryption.
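A toral automorphism permutes pixel positions on an n x n torus; the classic Arnold cat map is one instance, applied here to a raw pixel grid for illustration rather than to the wavelet subband the scheme actually scrambles:

```python
def cat_map(n, x, y, a=1, b=1):
    """One step of the toral automorphism [[1, a], [b, a*b + 1]] mod n.
    The matrix has determinant 1, so the map is a bijection on the grid."""
    return (x + a * y) % n, (b * x + (a * b + 1) * y) % n

def scramble(img, rounds=1, a=1, b=1):
    """Permute an n x n image (list of rows) by iterating the cat map;
    the parameters (a, b, rounds) act as the scrambling key."""
    n = len(img)
    out = img
    for _ in range(rounds):
        nxt = [[0] * n for _ in range(n)]
        for y in range(n):
            for x in range(n):
                nx, ny = cat_map(n, x, y, a, b)
                nxt[ny][nx] = out[y][x]
        out = nxt
    return out
```

Because the map is area-preserving and periodic, applying enough further rounds restores the original image, which gives a simple decryption path.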
A complex research project was undertaken by the authors to develop a method for the automatic identification of grasslands using neural analysis of aerial photographs taken from relatively low altitude. Developing such a method requires the collection of a large amount of varied data. To manage the data and automate their acquisition, an appropriate information system was developed in this study using a variety of commercial and free technologies. Technologies for processing and storing data in the form of raster and vector graphics were pivotal in the development of the research tool.
In this paper, we present a novel control approach that fuses factory automation/management techniques with the computer communication functions of the hardware. We describe communication between a computer and a programmable logic controller based on special system protocols. The developed controller, which incorporates prior knowledge from existing files, is intended as a guide for controller designers and programmers, and as a reference for establishing practical databases for intelligent agents and for communication tasks.
Three-dimensional medical images play an irreplaceable role in medical treatment, teaching, and research. However, collaborative processing and visualization of 3D medical images over the Internet remains one of the biggest challenges in supporting these activities. We therefore present a new approach to web-based synchronized collaborative processing and visualization of 3D medical images, along with a web-based videoconference function that enhances the performance of the whole system. All functions of the system are conveniently available from a common web browser, with no client installation required. Finally, this paper evaluates the prototype system using 3D medical data sets, demonstrating its good performance.
Very Long Baseline Interferometry (VLBI) has been successfully used in many deep space exploration projects over the past decades, because, unlike Doppler tracking, it does not lose sensitivity to spacecraft declination when the spacecraft is near the earth's equatorial plane. Differential One-way Ranging (DOR) can improve delay accuracy through a widely spanned bandwidth. In this paper, the performance of a DOR experiment is analyzed; the accuracy of the results can satisfy the navigation requirements of China's CE'2 mission.
3D data is now easy to acquire for family entertainment purposes because domestic RGBD sensors, e.g. Microsoft Kinect, are mass-produced, cheap and portable. However, the accuracy of facial modeling suffers from the roughness and instability of the raw input data from such sensors. To overcome this problem, we introduce compressive sensing (CS) to build a novel 3D super-resolution scheme that reconstructs high-resolution facial models from rough samples captured by Kinect. Unlike simple frame-fusion super-resolution, this approach acquires compressed samples for storage before a high-resolution image is produced. In this scheme, depth frames are first captured, and each is measured into compressed samples using sparse coding. The samples are then fused into an optimal one, from which a high-resolution image is finally recovered. The framework recovers a user's 3D facial model from compressed samples, reducing storage space as well as measurement cost in future devices, e.g. single-pixel depth cameras. Hence, this work can potentially be applied in future applications that need high resolution and short measurement time, such as access control systems using face recognition and smartphones with depth cameras.
Most popular object retrieval methods are based on the bag-of-words (BOW) model, which is both effective and efficient. In this paper we present a method that uses relations between words of the vocabulary to improve retrieval performance within the BOW framework. In the basic BOW retrieval framework, only a few words of the vocabulary, those that are spatially consistent across images, are useful for retrieval. We introduce a method to select these useful words and build a relevance measure between them. We combine this useful relevance with the basic BOW framework and with query expansion. The useful relevance can discover latent related words that do not appear in the query image, yielding a more accurate vector model for retrieval. Combined with query expansion, retrieval performance is better at a lower time cost.
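The baseline this method builds on is tf-idf-weighted BOW matching with cosine similarity; a minimal sketch (the word-relevance extension itself is not reproduced here, and the toy "documents" stand in for quantized visual words):

```python
import math
from collections import Counter

def idf_table(docs):
    """Inverse document frequency for every visual word in the corpus."""
    n = len(docs)
    df = Counter(w for d in docs for w in set(d))
    return {w: math.log(n / c) for w, c in df.items()}

def bow_vector(words, idf):
    """tf-idf weighted bag-of-words vector as a sparse dict."""
    tf = Counter(words)
    return {w: tf[w] * idf.get(w, 0.0) for w in tf}

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(x * v.get(w, 0.0) for w, x in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

The idf weighting already down-weights uninformative words; the paper's contribution is to go further and propagate relevance between the informative ones.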
Based on character structural features, such as the fixed width of characters and the proportional relationship of character spacing, a novel projection-based method for character segmentation of vehicle license plates is proposed. The algorithm not only locates the optimal segmentation point between characters quickly, but also removes the interference of image noise and the license plate frame. The experimental results show the good performance of the segmentation algorithm, and it can be used with all kinds of license plates.
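Projection-based segmentation splits the plate at empty columns of the vertical projection; a minimal sketch on a binarized image (the noise and frame removal steps are omitted):

```python
def segment_by_projection(binary):
    """Split a binarized plate image (list of rows of 0/1, foreground=1)
    into character column spans [start, end) wherever the vertical
    projection drops to zero."""
    cols = len(binary[0])
    proj = [sum(row[c] for row in binary) for c in range(cols)]
    spans, start = [], None
    for c, v in enumerate(proj):
        if v > 0 and start is None:
            start = c                 # entering a character
        elif v == 0 and start is not None:
            spans.append((start, c))  # leaving a character
            start = None
    if start is not None:
        spans.append((start, cols))
    return spans
```

The fixed character width mentioned in the abstract can then be used to reject spans that are too narrow (noise) or too wide (touching characters or the frame).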
Color is an important visual feature of images. However, a major drawback of the color histogram is that it loses spatial information, which leads to false retrievals. In this paper, we present a "Back"-shape regional division approach and combine it with the pyramid histogram of oriented gradients (PHOG) to extract image edge features, termed the refined edge histogram (REH). The REH descriptor is then applied to color image retrieval. Experimental results show that the proposed REH is suitable for color image retrieval and has higher precision and recall than other existing methods.
This paper proposes a method that locates the fault point by combining C-type traveling-wave ranging with analysis and selection of the line-mode component. A high-amplitude, narrow pulse is injected at the head of a line, and the arrival time of the returning waveform is detected. Comparing the normal waveform and the fault waveform yields the arrival time of the wave reflected from the fault, from which the fault distance is determined. Because the fault excites traveling-wave oscillations, comparing the oscillation durations of the branch lines identifies the faulted line as the one with the longest duration of vibration. Through theoretical analysis, Matlab simulation, and analysis of the selected data, this paper verifies the correctness of the method and demonstrates that this approach to fault location in distribution networks is practical.
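Single-ended traveling-wave ranging rests on the round-trip relation distance = v(t_reflect - t_inject)/2; a one-line sketch (the default propagation speed is an assumed typical value for overhead lines, not a figure from the paper):

```python
def fault_distance(t_inject, t_reflect, v=2.95e8):
    """Single-ended traveling-wave ranging: the injected pulse travels to
    the fault and back, so the fault lies at v * round_trip_time / 2.
    v is the wave propagation speed in m/s (assumed, near light speed)."""
    return v * (t_reflect - t_inject) / 2.0
```

With microsecond-resolution timestamps and a wave speed near 3e8 m/s, each microsecond of round-trip time corresponds to roughly 150 m of line, which sets the achievable ranging resolution.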
Based on the Maximum Degree Construction algorithm, a new selection algorithm is proposed in this paper. In the algorithm, each node and its neighbors issue certificates to each other to generate local in-degree and out-degree certificate repositories. Similarly to the ant colony algorithm, it finds the certificate chain between the source node and the destination node by repeatedly selecting, starting from the source, the node with the maximum number of certifications. The algorithm reduces the complexity of the selection, guarantees that a certificate chain is found, and saves space as well. This paper then presents a simulation of the algorithm; the simulation results show that it is an optimized selection algorithm for local certificate repositories.
The subject of the project was the selection of neural models for the identification of physical parameters of grain quality in malting barley. Its implementation was aided by an original computer system, "Hordeum v 2.0", in which graphic data were derived from digital images of kernels obtained by acquisition. The principal aim was to verify whether artificial neural networks combined with computer image analysis can become a practical tool in farming, and whether the proposed technology can be applied to analysing the quality of cereal grains.
A video-based handwritten signature verification framework is proposed in this paper. Using a camera as the sensor has the advantage that the entire writing process can be captured along with the signature. The main contribution of this work is that writing postures, which cannot be easily imitated or forged, are analyzed for verification. The proposed system achieves low false rejection rates while maintaining low false acceptance rates on a database containing both unskilled and skilled imitation signatures.
This paper presents a multi-region level set image segmentation method based on an image energy separation model. The image feature is extracted using the image energy decomposition method, and the regions are represented by level set functions with constraints. The coupled partial differential equations (PDEs) related to the minimization of the functional are handled through a dynamical scheme. A modified region competition factor is adopted to speed up the curve evolution functions; it also guarantees no vacuum and no overlap between neighboring regions. Several experiments are conducted on both synthetic and natural images. The results illustrate that the proposed multi-region segmentation method is fast and less sensitive to initialization.
A new method for moving object segmentation in infrared video based on human vision perception is proposed. We introduce a new region growing method to achieve accurate and complete segmentation of the moving objects. First, ideal seeds for each moving object are extracted based on the "hole" effect of temporal differencing. Next, considering that the human vision system (HVS) is most sensitive to the local contrast between a target and its surroundings, we propose a metric for "good" infrared target segmentation based on human vision perception. According to this metric, a search method based on rough and fine adjustment determines the best growing threshold for each moving object, and each object's segmented mask is grown from its seeds with that threshold. Finally, the masks of all moving objects are merged into a complete segmentation mask. Experimental results show that the proposed method is effective and superior for segmenting moving objects in infrared video.
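The growth step for a single candidate threshold can be sketched as a 4-connected flood fill from the seed; the rough/fine threshold search and the HVS-based metric that score each candidate are omitted here:

```python
from collections import deque

def region_grow(img, seed, thresh):
    """Grow a 4-connected region from seed = (row, col), accepting
    neighbors whose intensity differs from the seed pixel by at most
    thresh. Returns a boolean mask the same size as img."""
    h, w = len(img), len(img[0])
    sy, sx = seed
    base = img[sy][sx]
    mask = [[False] * w for _ in range(h)]
    mask[sy][sx] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny][nx]
                    and abs(img[ny][nx] - base) <= thresh):
                mask[ny][nx] = True
                q.append((ny, nx))
    return mask
```

In the full method, this growth is repeated for a range of thresholds and the mask whose local contrast best satisfies the HVS metric is kept.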
In the new century, cabin design has become an important factor affecting the combat capability of modern naval vessels. Traditional cabin design, based on naval rules and the designer's subjective feeling and experience, holds that weapons and equipment are more important than habitability, so crews' satisfaction with ships designed by traditional methods is not high. To solve this problem, a multiple attribute group decision-making method is proposed to evaluate cabin design projects. The method considers the many factors affecting cabin design, establishes a target system, quantifies fuzzy factors in cabin design, analyzes the needs of crews, and gives a reasonable evaluation of cabin design projects. Finally, an illustrative example validates the effectiveness and reliability of the method.
A sampling optimization method based on color difference analysis is proposed in this paper. Firstly, three color sets are defined and created: a super set used to simulate the whole CMYK color space, a test set for verifying characterization accuracy, and an initial characterization set. Secondly, the colorimetric values of the test set are predicted from the characterization results of the current characterization set. Thirdly, by analyzing the color differences over the test set, the 10% of samples with the largest color difference are selected as the large-color-difference set for optimization. The samples in the super set closest to this set are then found and added to the characterization set. Finally, the optimization is cycled until the characterization accuracy meets the given requirements. Experimental results showed a significant reduction in the number of samples together with improved characterization accuracy.
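The optimization loop ranks test samples by color difference; a sketch using the CIE76 formula (the abstract does not state which ΔE formula is used, so CIE76 is an assumption):

```python
def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between CIELAB triples."""
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5

def worst_samples(predicted, measured, frac=0.10):
    """Indices of the fraction of test samples with the largest color
    difference between predicted and measured Lab values -- the set the
    optimization loop refines on each cycle."""
    order = sorted(range(len(predicted)),
                   key=lambda i: delta_e76(predicted[i], measured[i]),
                   reverse=True)
    return order[:max(1, int(len(order) * frac))]
```

Each cycle adds super-set samples near these worst cases to the characterization set, concentrating measurement effort where the current model is least accurate.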
Outlines extracted from cone beam CT (CBCT) mandible images by conventional segmentation algorithms show fractures in local regions in forensic tests. This paper therefore proposes a new method, the erosion-reconstruction and dilation-reconstruction (ERDR) algorithm from mathematical morphology, to avoid the negative impact of these fractures and improve the accuracy of automatic mandible outline extraction. Experiments on 300 mandible images show that ERDR achieved a higher success rate (82.3%) in extracting outlines than the conventional segmentation method (24.0%).
Research on fractional-order control systems is increasingly extensive in the field of control theory, and simulation is an important means of studying control. In this paper, fractional-order systems and the fractional-order PIλDμ controller are introduced. Simulations are then used to analyze how variations in the PIλDμ controller's parameters, and in the orders of the integrator and differentiator, affect the performance of fractional-order control systems. Finally, some useful conclusions are drawn about the merits of fractional-order PIλDμ controllers.
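Simulating such a controller requires a discrete approximation of the fractional-order operator; the Grünwald-Letnikov definition is the standard one (this generic sketch is not the paper's simulation code):

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov binomial weights w_k = (-1)^k * C(alpha, k),
    via the recurrence w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def frac_derivative(samples, alpha, h):
    """Approximate the order-alpha derivative of a uniformly sampled
    signal at its last point; h is the sampling step. For alpha = 1 this
    reduces to the ordinary backward difference."""
    w = gl_weights(alpha, len(samples))
    return sum(wk * samples[-1 - k] for k, wk in enumerate(w)) / h ** alpha
```

In a PIλDμ loop, one such operator with order μ supplies the derivative term and another with order -λ the integral term, and the simulations in the paper vary exactly these two orders.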
Disparity estimation is a popular and important topic in computer vision and robotics. Stereo vision is commonly used for this task, but most existing methods fail in textureless regions and fall back on numerical interpolation there. Monocular features, which may contain helpful depth information, are usually ignored. We propose a novel method combining monocular and stereo cues to compute dense disparities from a pair of images. Image regions are categorized into reliable regions (textured and unoccluded) and unreliable regions (textureless or occluded). Stable and accurate disparities can be obtained in reliable regions; for unreliable regions, we use k-means to find the most similar reliable regions in terms of monocular cues. Our method is simple and effective. Experiments show that it generates more accurate disparity maps than existing methods on images with large textureless regions, e.g. snow or icebergs.
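The stereo half of such a pipeline is typically window-based matching; a minimal sum-of-absolute-differences sketch for one pixel (window size and search range are illustrative, and this is the baseline the paper improves on, not its full method):

```python
def sad(left, right, y, xl, xr, win=1):
    """Sum of absolute differences between a window centered at (y, xl)
    in the left image and (y, xr) in the right image."""
    total = 0
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            total += abs(left[y + dy][xl + dx] - right[y + dy][xr + dx])
    return total

def best_disparity(left, right, y, x, max_d, win=1):
    """Disparity at (y, x): the horizontal shift d minimizing SAD along
    the epipolar line, searched over 0..max_d (bounds permitting)."""
    return min(range(0, min(max_d, x - win) + 1),
               key=lambda d: sad(left, right, y, x, x - d, win))
```

In textureless regions every candidate d gives a near-identical SAD, which is exactly why the paper falls back on monocular cues there.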
This paper describes a network identity authentication protocol for a bank account system based on fingerprint identification and mixed encryption. The protocol provides every bank user a safe and effective way to manage his own bank account, and effectively prevents hacker attacks and bank clerk crime, thereby guaranteeing the legitimate rights and interests of bank users.
A fundamental challenge in image engineering is locating objects of interest in high-resolution images with efficient detection performance. Several man-made object detection approaches have been proposed, but the majority are not truly timesaving and suffer from low detection precision. To address this issue, we propose a novel approach to man-made object detection in aerial images that applies the MapReduce scheme for large-scale image analysis to texture feature extraction and clustering; MapReduce supports compute-intensive tasks in a highly parallel way. Comprehensive experiments show that the parallel framework saves a large amount of feature extraction time while achieving satisfactory detection performance.
Building on the study of human face detection, feature point location, and tracking in video sequences, this paper proposes a method that first determines face-like areas in each video frame using local SMQT features; then locates the feature points of detected faces with a modified ASM, in which the 1D texture model, which easily falls into local minima, is replaced with a 2D texture model; and finally groups the feature points by their characteristics and tracks them using the optical flow method, elastic graph matching, and a binary method, respectively. Tests show that the method locates facial features well on top of fast detection, and achieves good tracking results.
Existing methods can detect object-removal events, but when a person blocks the object, the event may be misdetected as a removal. A system that can distinguish occlusion events from removal events therefore improves accuracy and performance. In this paper, we present a method that detects and classifies object occlusion and object removal events. Detection uses the Canny edge detector, and classification is done by measuring the edge similarity of the object between the background and the current image. The system was tested in different places and gives acceptable, satisfactory results.
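One plausible reading of the edge-similarity test is: if the object's edges from the background model are still visible in the current frame, the object is still present (an occlusion elsewhere); if they have vanished, it was removed. A minimal sketch, assuming a gradient-threshold edge map as a stand-in for Canny (the `thresh` values and overlap rule are our assumptions):

```python
import numpy as np

def edge_map(img, thresh=0.2):
    """Binary edge map from gradient magnitude (a simple stand-in for Canny)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

def classify_region(background, current, bbox, sim_thresh=0.5):
    """Compare the object's edges between the background model and the
    current frame inside its bounding box: high overlap means the object
    is still there (occlusion event), low overlap means it was removed."""
    y0, y1, x0, x1 = bbox
    eb = edge_map(background[y0:y1, x0:x1])
    ec = edge_map(current[y0:y1, x0:x1])
    overlap = (eb & ec).sum() / max(eb.sum(), 1)  # fraction of edges still visible
    return 'occluded' if overlap >= sim_thresh else 'removed'
```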
A spectral filtering based method for top-down spatiotemporal saliency detection is proposed. The method favors the salient features of the target object that need to pop out. A feature vector representing the salient features of the target object is learned online from the first image in which the object is detected or manually initialized. The proper scale of the Gaussian kernel for spectral filtering is selected automatically according to the size ratio of the whole image to the target object. Guided by this top-down information, a target-related saliency map can be built in subsequent images. This focuses processing on the most relevant salient region and can be extended to complicated computer vision tasks. Experimental results demonstrate the effectiveness of the proposed method.
A novel action recognition framework based on an integrated model is proposed in this paper. First, the covariance descriptor is used to extract features from video sequences; then a class-specific codebook is constructed for each class and appended to the global codebook. A static model applying template matching and a dynamic model employing a trigram model are learned to capture complementary information about an action. Finally, an integrated model estimates the confidence of the static and dynamic models and produces a reliable result. Comparative experiments show that the presented method achieves superior results over other state-of-the-art approaches. Keywords: human action recognition, covariance descriptor, integrated model
Edge detection algorithms based on spatial-domain and wavelet techniques can effectively detect image edges of limited orientations. Since these algorithms do not fully utilize neighborhood information, large errors appear around complex edge areas in the detection results. To solve this problem, a novel edge detection algorithm combining the nonsubsampled contourlet transform (NSCT) and the Canny algorithm is proposed in this paper. Simulation results demonstrate that the algorithm extracts more edge details than the Canny algorithm alone, with good continuity and robustness.
In this paper, we propose a spatial Markov Random Field (MRF) model to detect abnormal activities in crowded scenes. The nodes of the MRF graph are monitors spread evenly over the image, and spatially neighboring nodes are connected by links. The normal activity patterns at each node are learned by fitting a Gaussian Mixture Model (GMM) to local optical flow, while the correlation between adjacent nodes is represented by fitting a single Gaussian to the inner product of the optical-flow histogram vectors observed in regions centered at the two nodes. For the optical-flow patterns detected in test video clips, we use the learned model and the MRF graph to compute an energy value at each node, and decide whether the node's behavior pattern is normal or abnormal by comparing this value with a threshold. Further, we apply a method similar to the GMM update used in background subtraction to incrementally update the current model, adapting to visual context changes over long periods of time. Experiments on the published UCSD anomaly datasets Ped1 and Ped2 show the effectiveness of our method.
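The per-node normality test reduces to thresholding an energy (negative log-likelihood) under the learned model. A heavily simplified sketch, using a single Gaussian per node in place of the paper's GMM and omitting the pairwise MRF terms (all names and thresholds are ours):

```python
import numpy as np

def fit_node_models(train_flows):
    """train_flows: (T, N) optical-flow magnitudes over T frames at N monitor
    nodes. A single Gaussian per node stands in for the per-node GMM."""
    mu = train_flows.mean(axis=0)
    var = train_flows.var(axis=0) + 1e-6   # floor the variance for stability
    return mu, var

def node_energy(obs, mu, var):
    """Negative log-likelihood of the observed pattern at each node; the paper
    additionally adds pairwise terms between neighbouring MRF nodes."""
    return 0.5 * np.log(2 * np.pi * var) + (obs - mu) ** 2 / (2 * var)

def detect_abnormal(obs, mu, var, thresh):
    """A node is flagged abnormal when its energy exceeds the threshold."""
    return node_energy(obs, mu, var) > thresh
```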
Composting is one of the best methods for managing sewage sludge. In a well-conducted composting process it is important to identify early the moment at which the material reaches the young compost stage. The objective of this study was to determine parameters, contained in images of samples of composted material, that can be used to evaluate the degree of compost maturity. The study focused on two types of compost: sewage sludge with corn straw and sewage sludge with rapeseed straw. The samples were photographed on a stand prepared for image acquisition under VIS, UV-A, and mixed (VIS + UV-A) light; for UV-A light, three exposure times were used. The values of 46 parameters were estimated for each of the images extracted from the photographs of the samples. Exemplary averaged values of selected parameters obtained from the images on successive sampling days are presented. All of the parameters obtained from the images form the basis for preparing the training, validation, and test data sets needed to develop neural models for classifying the young compost stage.
A novel image classification method based on sparse representation is proposed. The initial dictionary consists of feature patches obtained through feature extraction, and the K-SVD algorithm is adopted to update the dictionary. A dictionary is learned from the images of each category, so that the images of that category can be represented sparsely over it. Classification is then achieved in terms of the projection error. Experimental results show that the proposed method achieves comparable performance.
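The decision rule above (assign the class whose dictionary reconstructs the sample with the smallest error) can be sketched as follows. For brevity an unconstrained least-squares projection stands in for sparse coding over a K-SVD-trained dictionary; the function names are ours:

```python
import numpy as np

def projection_error(x, D):
    """Residual norm of x after least-squares projection onto span(D).
    (The paper codes x sparsely over a K-SVD dictionary; this unconstrained
    projection is a simplified stand-in for that step.)"""
    coef, *_ = np.linalg.lstsq(D, x, rcond=None)
    return np.linalg.norm(x - D @ coef)

def classify(x, dictionaries):
    """Pick the class whose dictionary yields the smallest projection error."""
    errors = [projection_error(x, D) for D in dictionaries]
    return int(np.argmin(errors))
```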
This paper presents a method for checking the status of isolators, applied to sequence control in a smart substation based on SmartGuard, a mobile inspection robot for substations. The method recognizes the status of an isolator by analyzing its features: the homography matrix between a template image and a newly acquired image is computed from SIFT feature correspondences, the image region containing the isolator is then located, and finally the isolator status is recognized by image processing. Experimental results prove that the method recognizes isolator status effectively. Using this SmartGuard-based technology, the substation realizes a one-key sequence control system.
A new method for fabric defect detection is proposed. It is based on a filter group containing four annular Gaussian band-pass filters and is aimed at detecting defects in fabrics with plain and twill structures. A fabric sample image is processed with this filter group to obtain the filtered images, which are then binarized and fused to reconstruct a defect binary image that distinguishes defects from the texture background. In performance evaluation and comparison experiments, the method was applied to a variety of fabrics with various defects. The results confirm that the method has good real-time performance and is effective for fabric defect detection.
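An annular Gaussian band-pass filter passes energy in a ring of radial frequencies, which suits periodic fabric textures. A minimal frequency-domain sketch (the center frequency `f0` and bandwidth `sigma` are illustrative, not the paper's four-filter design):

```python
import numpy as np

def annular_bandpass(shape, f0, sigma):
    """Annular Gaussian band-pass mask centred on radial frequency f0
    (frequencies normalised so that Nyquist is 0.5)."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    r = np.hypot(fy, fx)
    return np.exp(-((r - f0) ** 2) / (2 * sigma ** 2))

def filter_image(img, f0, sigma):
    """Apply the annular band-pass filter in the Fourier domain."""
    F = np.fft.fft2(img)
    return np.real(np.fft.ifft2(F * annular_bandpass(img.shape, f0, sigma)))
```

Components whose radial frequency lies on the ring pass through almost unchanged; everything else, including the DC background, is suppressed, so the filtered image can be binarized to isolate defects.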
In a real-time video surveillance system, background noise and disturbances have a significant impact on moving object detection. The traditional Gaussian mixture model (GMM) adapts well to various complex backgrounds, but it converges slowly and is vulnerable to illumination changes. This paper proposes an improved moving target detection algorithm based on the Gaussian mixture model: it increases the convergence rate of the foreground-to-background model transformation by introducing changing factors, and solves the sudden-illumination problem with a three-frame differencing method. The results show that the algorithm improves the accuracy of moving object detection and has good stability and real-time performance.
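The three-frame differencing component can be sketched in a few lines: a pixel is declared moving only if it differs from both the previous and the next frame, which suppresses single-frame illumination jumps that fool a plain two-frame difference. (The threshold value is our assumption; the paper combines this with the improved GMM.)

```python
import numpy as np

def three_frame_diff(f_prev, f_curr, f_next, thresh=25):
    """Three-frame differencing: foreground only where the current frame
    differs from BOTH neighbours, so a one-frame global illumination jump
    (which changes only one of the two differences per pixel pair) is rejected."""
    d1 = np.abs(f_curr.astype(int) - f_prev.astype(int)) > thresh
    d2 = np.abs(f_next.astype(int) - f_curr.astype(int)) > thresh
    return d1 & d2
```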
A novel method for detecting and tracking vehicles is proposed. Based on moving object segmentation, the method uses a Cellular Neural Network (CNN) for background subtraction in motion detection, in order to distinguish vehicles from other objects in the regions of interest. A tracking method based on regional characteristic matching is also proposed, in which the distance between characteristic vectors is used to match current motion regions and track the vehicles. Perceptual grouping refers to the visual system's ability to organize detected image features according to cues such as proximity, continuity, and closure, and it has attracted wide attention in computer vision. In this paper, we propose a new approach to occlusion elimination that combines perceptual grouping with the optical flow field. Experimental results show that these methods can extract traffic information with high accuracy and efficiency.
The IG method is an excellent salient region detection method owing to its good generality and well-defined boundaries. In this paper, an improved method based on the IG method is proposed to generate saliency maps for phytoplankton microscopic images. Exploiting the characteristics of such images, a Gaussian low-pass filter is used to reduce the high-frequency components corresponding to water stains and dust specks. In addition to the luminance and color used in the IG method, saturation is added as a saliency cue, because the saturation of the background is lower than that of the cells. Experimental results show that the proposed method not only improves visual quality significantly, but also obtains higher precision and better recall than the IG method.
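The IG (frequency-tuned) scheme scores each pixel by its distance from the global channel means after low-pass filtering; adding a saturation channel simply extends the sum. A minimal sketch, with a cheap separable box blur standing in for the Gaussian low-pass filter (blur radius and helper names are ours):

```python
import numpy as np

def blur1d(a, r, axis):
    """Average over a (2r+1)-tap window along one axis, clamping at borders."""
    out = np.zeros_like(a, dtype=float)
    n = a.shape[axis]
    for s in range(-r, r + 1):
        out += np.take(a, np.clip(np.arange(n) + s, 0, n - 1), axis=axis)
    return out / (2 * r + 1)

def box_blur(a, r=2):
    """Separable box blur: a crude stand-in for the Gaussian low-pass filter."""
    return blur1d(blur1d(a.astype(float), r, 0), r, 1)

def saliency(channels, r=2):
    """IG-style saliency: distance between each blurred pixel and the global
    channel mean, summed over channels (which may include saturation)."""
    s = np.zeros(channels[0].shape)
    for c in channels:
        s += (box_blur(c, r) - c.mean()) ** 2
    return np.sqrt(s)
```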
The project aimed to produce a neural network classification model for automatically evaluating the quality of greenhouse tomatoes, using computer image analysis and artificial neural networks. Based on an analysis of the biological material, the authors selected a set of features describing the physical parameters that allow quality class identification. Image analysis of digital photographs of tomato samples was used to choose the characteristic features, and the characteristics obtained from the images served as learning data for the artificial neural network.
The aim of this work was the neural identification of selected apple orchard pests. The classification was conducted on the basis of graphical information coded as selected geometric characteristics of the pests (agrofags) presented in digital images. A neural classification model is presented in this paper, optimized using learning sets acquired from information contained in digital photographs of the pests. In particular, the problem of identifying the 6 apple pests most commonly encountered in Polish orchards is addressed. To classify the pests, neural modelling methods were used, supported by digital image analysis techniques.
Multimodal biometrics based on finger identification has been a hot topic in recent years. In this paper, a novel fingerprint-vein based biometric method is proposed to improve the reliability and accuracy of finger recognition systems. First, second-order steerable filters are used to enhance and extract the minutiae features of the fingerprint (FP) and finger vein (FV). Second, the texture features of the fingerprint and finger vein are extracted by a bank of Gabor filters. Third, a new triangle-region fusion method is proposed to integrate all the fingerprint and finger-vein features at the feature level, so that the fused features contain both the finger texture information and the triangular geometric structure of the minutiae. Finally, experimental results on self-constructed finger-vein and fingerprint databases show that the proposed method is reliable and precise for personal identification.
This paper addresses the detection of visual objects in images, a fundamental problem in computer vision. Instead of training a classifier to determine the locations of visual objects, we propose a method based on matching a sample of the object against all sub-windows of the test images. Local histogram of gradient (LHOG) features are extracted from the sample image and the test images to describe image patterns, and the integral image technique is employed to accelerate LHOG computation. PCA is then applied to reduce the dimensionality of the LHOG features, and the distance between the sample image and each sub-window is measured by the cosine angle. An adaptive strategy is used to distinguish object sub-windows from non-object sub-windows.
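Two of the building blocks above are easy to make concrete: the integral image, which turns any rectangle sum into four lookups (the speed-up behind fast LHOG extraction), and the cosine-angle similarity used to compare feature vectors. A small sketch (function names are ours):

```python
import numpy as np

def integral_image(a):
    """Summed-area table with a zero first row/column, so any rectangle
    sum costs four lookups regardless of its size."""
    ii = np.zeros((a.shape[0] + 1, a.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = a.cumsum(0).cumsum(1)
    return ii

def rect_sum(ii, y0, x0, y1, x1):
    """Sum of a[y0:y1, x0:x1] in O(1) from the summed-area table."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```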
This paper presents a new representation for image classification based on the spatial correlogram, which captures the spatial co-occurrences of pairwise codewords. This representation augments the traditional bag-of-features model with spatial information while compressing the information contained in a correlogram without loss of discriminative power. To increase classification accuracy further, we combine the correlogram with a spatial pyramid. In a number of image classification experiments, we find that the proposed method reaches good performance and high accuracy.
This paper presents a method for detecting weld defects in radiographic images. First, the radiographic images are enhanced using adaptive histogram equalization and filtered using mean and Wiener filters. Second, the welding area is selected from the radiographic image. Third, the images are converted to signals, and features are extracted from the bispectrum of these signals. Finally, neural networks are used for training and testing the proposed method. The model was tested on 100 radiographic images in the presence of noise and image blurring. Results show that the proposed model performs best for weld defect detection when the bispectrum is estimated by the autoregressive moving average (ARMA) method.
This paper presents a method for plume surface detection in infrared images. First, the infrared image is binarized with the min-max method; then an edge detection operator is constructed from multi-scale, multi-shape morphological structuring elements of mathematical morphology to detect the edge of the engine's plume in the infrared image. Results show that, compared with traditional edge detection operators, our method suppresses noise interference better and meets the real-time requirement.
Automotive and advanced driver assistance systems have attracted a great deal of attention lately. In these systems, effective and reliable vehicle detection is important because it can reduce the number of accidents and save human lives. This paper describes an approach to detecting a forward vehicle using a camera mounted on the moving vehicle, with two methods for detecting a vehicle on the road. First, the vehicle's shadow gives the general location of a vehicle candidate. Second, strong vertical edges are identified at the left and right sides of a vehicle. By combining the shadows and the edges, we can detect the vehicle's location. Other regions may also be detected, such as car windows, reflections, and illumination by the sun; to remove these factors, treated as noise, a filter is applied, after which the exact location of the vehicle can be calculated. Additionally, connected component labeling is used to obtain coordinates and establish the vehicle's location: it finds all connected components in an image and assigns a unique label to all points in the same component. These methods are useful for vehicle detection and the development of driving assistance systems, and can help protect drivers from accidents.
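The connected component labeling step mentioned above can be sketched with a simple 4-connected flood fill (a minimal illustration; production systems typically use two-pass union-find labeling):

```python
import numpy as np
from collections import deque

def label_components(mask):
    """4-connected component labeling by breadth-first flood fill.
    Returns a label image (0 = background) and the number of components."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue                       # already absorbed into a component
        current += 1
        q = deque([(y, x)])
        labels[y, x] = current
        while q:
            cy, cx = q.popleft()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    q.append((ny, nx))
    return labels, current
```

The bounding box of each labeled component then gives the candidate vehicle's image coordinates.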
To obtain the exact center of an asymmetric, semicircular-aperture laser spot, a laser spot detection method based on circle fitting is proposed in this paper. The laser spot image is threshold-segmented using a gray-morphology algorithm; rough spot edges are detected in both the vertical and horizontal directions; short arcs and isolated edge points are deleted by contour growing; the best circle contour is obtained by iterative fitting; and the final standard circle is fitted at the end. Experimental results show that the precision of the method is clearly better than that of the centroid (gravity) method used in traditional large-laser automatic alignment systems, and that the accuracy achieved on asymmetric, semicircular spot centers meets the requirements of the system.
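The core fitting step can be illustrated with the classic algebraic (Kåsa) least-squares circle fit, which recovers the full center even when only a semicircular arc of edge points is available; this is a generic sketch, not necessarily the paper's exact iterative formulation:

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic (Kasa) least-squares circle fit: solve the linear system
    x^2 + y^2 = 2*a*x + 2*b*y + c for centre (a, b) and radius
    sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    rhs = xs ** 2 + ys ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a ** 2 + b ** 2)
```

Because the fit is linear, it needs no initial guess, and a half-circle of edge points constrains the center just as well as a full one.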
For efficient object edge detection in digital images, this paper studies traditional methods and an SVM-based algorithm, and analyzes the pseudo-edges and poor noise resistance of the Canny edge detection algorithm. To provide a reliable edge extraction method, a new detection algorithm based on a fuzzy SVM (FSVM) is proposed. It consists of several steps: first, classification samples are trained and different membership functions are assigned to different samples; then a new training sample set is formed by increasing the penalty on misclassified sub-samples, and the new FSVM classification model is trained and tested on it; finally, the edges of the object image are extracted using the model. Experimental results show that good edge detection images are obtained, and experiments with added noise show that the method has good noise resistance.
This paper describes a novel method for facial expression recognition based on non-linear manifold techniques. Graph-based algorithms are designed to exploit structure in data and regularize accordingly, a goal shared by several other algorithms, from the linear method of principal components analysis (PCA) to modern variants such as Laplacian eigenmaps. In this paper we focus on manifold learning for dimensionality reduction and clustering using Laplacian eigenmaps for facial expression recognition. We evaluate the algorithm using all pixels and selected features respectively, and compare the performance of the proposed non-linear manifold method with a previous linear manifold approach; the non-linear method produces a higher recognition rate than facial expression representations using linear methods.
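The Laplacian eigenmaps embedding can be sketched in a few lines: build heat-kernel affinities, form the graph Laplacian, and take the eigenvectors with the smallest non-zero eigenvalues as coordinates. For brevity this uses the unnormalized Laplacian on a fully connected graph, rather than the generalized eigenproblem L v = λ D v of the standard formulation:

```python
import numpy as np

def laplacian_eigenmaps(X, n_components=2, sigma=1.0):
    """Embed the rows of X: heat-kernel affinities W, graph Laplacian
    L = D - W, then the eigenvectors of L with the smallest non-zero
    eigenvalues give the low-dimensional coordinates."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.exp(-sq / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(1))
    L = D - W
    vals, vecs = np.linalg.eigh(L)        # eigenvalues in ascending order
    return vecs[:, 1:1 + n_components]    # skip the constant eigenvector
```

On expression data the hope is that images of the same expression land near each other in the embedding, so a simple classifier in the reduced space suffices.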
Moving object detection is an essential preliminary step in video analysis. Some moving objects, such as spitting steam, fire, and smoke, have a unique motion pattern: their lower part stays basically unchanged while their upper part sways back and forth. Based on this unique motion feature, a swaying object detection algorithm is presented in this paper. First, a fuzzy integral is adopted to integrate color features and extract moving objects from video frames. Second, a swaying identification algorithm based on centroid calculation is used to distinguish swaying objects from other moving objects. Experiments show that the proposed method detects swaying objects effectively.
This paper presents a novel method based on visual saliency and template matching for detecting vehicle logos in images captured by cross-road cameras. The method first generates a saliency map based on a modified Itti saliency model, then obtains regions of interest (ROI) by thresholding the saliency map, and finally performs edge-based template matching to locate the logo. Experiments on more than 2400 images validate both the high accuracy and the efficiency of the proposed method, and demonstrate that it is suitable for real-time application.
Automatic speech emotion recognition has important applications in human-machine communication. The majority of current research in this area is focused on finding optimal feature parameters. In recent studies, several glottal features were examined as potential cues for emotion differentiation. In this study, a new type of feature parameter is proposed, which calculates the energy entropy of values within selected wavelet packet frequency bands. The modeling and classification tasks are conducted using the classical GMM algorithm. The experiments use two data sets: the Speech Under Simulated Emotion (SUSE) data set annotated with three different emotions (angry, neutral, and soft) and the Berlin Emotional Speech (BES) database annotated with seven different emotions (angry, bored, disgust, fear, happy, sad, and neutral). The average classification accuracy achieved for the SUSE data (74%-76%) is significantly higher than the accuracy achieved for the BES data (51%-54%). In both cases, the accuracy was significantly higher than the respective random guessing levels (33% for SUSE and 14.3% for BES).
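The proposed feature, energy entropy over wavelet packet bands, can be sketched with a Haar wavelet packet decomposition (the Haar basis, decomposition depth, and use of all bands are our simplifications; the paper computes entropy on selected bands):

```python
import numpy as np

def haar_step(x):
    """One Haar analysis step: approximation and detail at half the rate.
    Assumes the input length is even."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def wavelet_packet_bands(x, depth):
    """Full wavelet-packet tree: every band is split again at each level,
    giving 2**depth frequency bands (input length must be divisible by 2**depth)."""
    bands = [np.asarray(x, dtype=float)]
    for _ in range(depth):
        nxt = []
        for b in bands:
            a, d = haar_step(b)
            nxt += [a, d]
        bands = nxt
    return bands

def energy_entropy(x, depth=3):
    """Shannon entropy (bits) of the normalised band energies: low for tonal
    signals whose energy concentrates in one band, high for noise-like signals."""
    e = np.array([float((b ** 2).sum()) for b in wavelet_packet_bands(x, depth)])
    p = e / e.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```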
The accurate identification of the different protocols used by various applications plays an important role in many network management and monitoring tasks. However, the development of emerging applications and the evolution of existing ones have made the early success of port-number or payload-signature based classification methods no longer repeatable. On the other hand, machine learning based approaches have achieved steady progress in classification accuracy, using statistical features extracted from packets and flows. In this paper, by introducing a Markov random field to model the semantics of network application protocols, we investigate a new approach to classifying network traffic into application protocols. First, the packets in a flow are aggregated into messages that contain the related semantic information. We assume that simple message features such as the length and direction of a message are observable, while the semantics of messages are invisible in both the training and test phases. Tested on traffic traces collected from heterogeneous sources, this approach was demonstrated to deliver good accuracy and speed.