This PDF file contains the front matter associated with SPIE Proceedings Volume 9069, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Breast cancer occurs with high frequency among women. In most cases, the main early signs are masses and calcifications. Distinguishing masses from normal tissue remains challenging because masses vary in shape, margin, and size. In this paper, a novel method for mass detection in mammograms is presented. First, morphological operators are employed to locate mass candidates. Then anisotropic diffusion is applied so that mass regions display their multiple concentric layers (MCL) more clearly. Finally, an extended concentric morphology model (ECMM) criterion combining the MCL criterion with template matching is proposed to detect masses. The method was examined on 170 images from the Digital Database for Screening Mammography (DDSM). The detection rate is 93.92% at 1.88 false positives per image (FPs/I), demonstrating the effectiveness of the proposed method.
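As a rough illustration of the diffusion step, a Perona-Malik scheme (a standard anisotropic diffusion; the abstract does not specify the exact variant or parameters used) smooths within homogeneous regions while preserving the boundaries between concentric layers:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, lam=0.2):
    """Perona-Malik diffusion: smooths homogeneous regions while
    preserving edges (conduction falls off with gradient magnitude).
    Borders are treated periodically via np.roll, adequate for a sketch."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # differences to the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # exponential conduction coefficient g(d) = exp(-(d/kappa)^2)
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

Small gradients (noise) diffuse freely, while large gradients (layer boundaries) conduct almost nothing, which is what makes the MCL structure easier to match afterwards.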
Facial expressions reflect a character's internal emotional state or its response to social communication. Although much effort has been devoted to generating realistic facial expressions, the topic remains challenging because humans are sensitive to subtle facial movements. In this paper, we present a method for generating facial animation that reflects true facial muscle movements with high fidelity. An intermediate model space is introduced to transfer captured static AU peak frames, based on FACS, to the conformed target face. Dynamic parameters derived with a psychophysical method are then integrated to generate facial animation, which is assumed to represent the natural correlation of multiple AUs. Finally, the animation sequence in the intermediate model space is mapped to the target face to produce the final animation.
Ink diffusion on Xuan paper is essentially a particle diffusion process, and the result is mainly determined by the paper structure and the ink attributes. This paper introduces a weighted fiber structure to model Xuan paper, in which the paper fibers act as hindrances to ink diffusion. Based on this paper model, we present a novel simulation method for ink diffusion using a diffusion equation with a variable coefficient. The diffusion coefficient combines several factors, including the fiber weights and the ink quantity at the current diffusion location. To solve the ink diffusion equation efficiently, we also propose a new implicit difference method with high accuracy and linear time complexity. Compared with previous similar methods, ours describes the spontaneous gray-level evolution and generates a more natural diffusion boundary. Experimental results demonstrate that our approach can realistically simulate different diffusion effects on different kinds of Xuan paper.
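The variable-coefficient equation can be sketched with a single explicit finite-difference step (the paper itself uses an implicit scheme for stability and linear-time solution; the coefficient form below, combining a fiber weight with local ink quantity, is an assumption for illustration only):

```python
import numpy as np

def diffuse_step(u, fiber_w, dt=0.1):
    """One explicit step of du/dt = div(D grad u). The diffusion
    coefficient D falls with the fiber weight (fibers hinder diffusion)
    and grows with local ink quantity u; this functional form is an
    illustrative assumption, not the paper's exact model."""
    D = (1.0 - fiber_w) * (u / (u.max() + 1e-9))
    # diffusion coefficient on cell faces (arithmetic mean), zero-flux borders
    Dx = 0.5 * (D[:, 1:] + D[:, :-1])
    Dy = 0.5 * (D[1:, :] + D[:-1, :])
    fx = Dx * (u[:, 1:] - u[:, :-1])   # horizontal face fluxes
    fy = Dy * (u[1:, :] - u[:-1, :])   # vertical face fluxes
    div = np.zeros_like(u)
    div[:, :-1] += fx; div[:, 1:] -= fx
    div[:-1, :] += fy; div[1:, :] -= fy
    return u + dt * div
```

Because fluxes are exchanged between neighbouring cells, the total ink quantity is conserved, and a heavy fiber weight locally throttles the flux, producing the irregular diffusion boundary the paper describes.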
In this paper, an adaptive edge enhancement algorithm is proposed to reconstruct a super-resolution image from a single low-resolution one. To improve the high-resolution reconstruction, edge statistics are learned from the scenes using maximum likelihood estimation to approximate an edge-boosting weight, which significantly enhances edge information in high-frequency areas. The edge sketch image is then adaptively combined with the output of a Wiener filter according to the local variance. Experimental results on several test images show that the super-resolution reconstruction succeeds both quantitatively and perceptually.
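The adaptive combination can be sketched as a per-pixel blend driven by local variance; the weight form `v / (v + t)` and the threshold `t` are illustrative assumptions, not the paper's exact rule:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_blend(wiener_out, edge_sketch, size=7, t=100.0):
    """Blend a smooth Wiener reconstruction with an edge-boosted sketch.
    High local variance (edges/texture) favours the edge sketch; flat
    areas keep the Wiener result."""
    m = uniform_filter(wiener_out, size)
    v = uniform_filter(wiener_out ** 2, size) - m ** 2   # local variance
    w = v / (v + t)                                      # assumed weight form
    return w * edge_sketch + (1.0 - w) * wiener_out
```

In flat regions the variance, and hence the weight, is near zero, so the Wiener output passes through untouched; near edges the boosted sketch dominates.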
Image completion fills a missing region using information from the same or another image. Existing algorithms find it difficult to balance visual plausibility against efficiency. In this paper, we first propose a novel graph-based approach that combines patch offsets with structure features to obtain a more coherent completion result. We further propose using a few dominant offsets with an adaptive label mechanism, and formulate image completion as a graph-cut optimization problem. Experiments on a wide variety of images show that our method yields better results than state-of-the-art methods in various challenging cases, in both visual quality and efficiency.
The fingertips of the human hand play an important role in hand-based interaction with computers, and identifying fingertip positions in hand images is vital for developing a human-computer interaction system. This paper proposes a novel method for detecting fingertips in a hand image by analyzing the geometric structure of the fingers. The approach has three parts: first, the hand image is segmented to detect the hand; second, invariant features (curvature zero-crossing points) are extracted from the hand boundary; third, the fingertips are detected. Experimental results show that the proposed approach is promising.
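The curvature zero-crossing step can be sketched for a sampled contour as follows (the segmentation and fingertip-selection stages are omitted; the discrete derivative scheme is an assumption):

```python
import numpy as np

def curvature(x, y):
    """Signed curvature of a contour sampled at points (x, y),
    k = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2)."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5

def zero_crossings(k):
    """Indices where the curvature changes sign (convex/concave flips,
    which delimit the finger protrusions along the hand boundary)."""
    s = np.sign(k)
    return np.where(s[:-1] * s[1:] < 0)[0]
```

A circle has constant-sign curvature and thus no zero crossings, while a contour alternating between convex fingertips and concave finger valleys produces one crossing per transition.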
Anti-counterfeiting watermarked images on lithographic prints must be digitized by a scanner or camera before authentication. The captured images usually suffer geometric distortion, which interferes with watermark extraction. This paper proposes a geometric distortion correction method for lithographic watermarked authentication images. The Hough transform is used to detect edge lines or corner points, which are then combined with the oblique distortion model or the perspective transformation model to correct the distortion. Finally, morphological filtering is used to refine the corrected images. Experimental results demonstrate that the method corrects the geometric distortion well and that the extracted watermark image is clear enough for authentication.
Restoring an image degraded by motion blur depends heavily on estimating the point spread function, which in turn depends on the blur orientation and the blur extent. In this paper, we present a spectral analysis of image gradients, whose spectrum exhibits stronger periodic stripes than the spectrum of the image itself. Experiments on simulated images show that our algorithm accurately identifies the blurring kernel over a wider range of blur orientations and extents. Furthermore, it proves more robust to noise than available techniques.
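Computing the spectrum of the gradient rather than of the raw image can be sketched as follows; the stripe-localization step that actually recovers orientation and extent is omitted:

```python
import numpy as np

def gradient_spectrum(img):
    """Log-magnitude spectrum of the horizontal image gradient. For a
    motion-blurred image, the sinc-like zeros of the blur kernel appear
    as dark parallel stripes; differentiating first whitens the natural
    image spectrum, which makes the stripes easier to localize than in
    the spectrum of the raw image."""
    g = np.diff(img.astype(np.float64), axis=1)   # horizontal gradient
    f = np.fft.fftshift(np.fft.fft2(g))
    return np.log1p(np.abs(f))
```

The stripes run perpendicular to the blur direction, and their spacing is inversely proportional to the blur extent, which is how both PSF parameters can be read off the spectrum.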
In this paper, we design a DCT-based model for estimating spatio-temporal just noticeable distortion (JND) profiles
of color image/video. Based on a mathematical model of measuring the base detection threshold for each DCT
coefficient in the color component of color images, the masking adjustments for luminance component and chrominance
components are utilized for estimating the spatial JND profiles. The above spatial JND profiles are extended to video
signals by incorporating the proposed block-based temporal masking adjustment mainly considering local temporal
statistics in luminance component. The model is verified by designing a subjective viewing test of evaluating the visual
quality under the specified viewing condition. In the experiment, the test video is contaminated by the estimated JND
profiles in the DCT domain and is compared with the original video. The simulation results show that the JND-contaminated color video has nearly perceptually lossless visual quality and that the model is able to estimate the JND profiles inherent in color videos.
In this work, we propose a feasible 3D video generation method that enables high-quality visual perception using a monocular uncalibrated camera. Anthropometric distances between standard facial landmarks are approximated from the person's age and gender. These measurements are used in a two-stage approach to facilitate the construction of binocular stereo images. Specifically, one view of the background is registered in the initial stage of video shooting, followed by an automatically guided displacement of the camera toward its secondary position. At the secondary position, real-time capture begins and the foreground (viewed person) region is extracted for each frame. After an accurate parallax estimation, the extracted foreground is placed in front of the background image captured at the initial position. The constructed full view of the initial position, combined with the view of the secondary (current) position, forms the complete binocular pairs during real-time video shooting. Subjective evaluation results indicate competent depth-perception quality with the proposed system.
Progressive photon mapping solves the memory limitation of traditional photon mapping. It yields the correct radiance given a large number of passes, but it converges slowly. We propose an anisotropic progressive photon mapping method to generate high-quality images within a few passes. During rendering, unlike standard progressive photon mapping, we store the photons on the surfaces. At the end of each pass, an anisotropic method computes the radiance of each eye ray from the stored photons. Before moving to a new pass, the photons in the scene are cleared. Experiments show that our method produces better results than standard progressive photon mapping in both numerical and visual quality.
This paper presents a simple and fast algorithm to extract the skeleton of vascular structures from segmented vessel datasets. Our algorithm moves a small volume of interest step by step along the vessel tree. With the introduction of a signed distance function (SDF), the sphere moving along the vessel tree can easily and automatically detect bifurcations and predict the location of the next axis point. Experiments have been carried out to demonstrate the strengths of the proposed method.
Since "cloud computing" was put forward by Google, it has quickly become one of the most popular concepts in the IT industry and has permeated many areas, promoted by IBM, Microsoft, and other IT giants. In this paper, bibliometric analysis is used to investigate global cloud computing research trends based on the Web of Science (WoS) database and the Engineering Index (EI) Compendex database. The publications, countries, institutes, and keywords of the papers are studied quantitatively, and figures and tables are used to describe the output and development trends of cloud computing research.
Visualization and reconstruction of blood vessels from standard medical datasets play an important role in many clinical situations. This paper presents a survey of the visualization and reconstruction of vascular structures. First, visualization techniques for vasculature are introduced, including volume rendering and surface rendering. We then focus on reconstruction techniques for vascular structures, which fall into two categories: explicit and implicit reconstruction. With reconstructed vascular geometry, it is easy to produce smooth visualizations of vessel surfaces. In addition, finding an accurate geometric representation of vascular structures is crucial for developing computer-aided vascular surgery systems.
Automated processing and quantification of biological images have attracted rapidly increasing attention in image processing and pattern recognition, because computerized image and pattern analyses are critical for new biological findings and for drug discovery based on modern high-throughput, high-content image screening. This paper presents a study of the automated detection of mitochondria, a subcellular structure of eukaryotic cells, in microscopy images. The automated identification of mitochondria in intracellular space captured by the state-of-the-art combination of focused ion beam and scanning electron microscope imaging reported here is the first of its type. Existing methods and a proposed texture analysis algorithm were tested on real intracellular images. The high rate of correctly detected mitochondria locations in a complex environment suggests the effectiveness of the proposed approach.
This paper presents a robust adaptive embedding scheme using a modified spatio-temporal just noticeable distortion (JND) model, designed to trace the distribution of H.264/AVC video content and protect it from unauthorized redistribution. Embedding is performed during encoding in selected Intra 4x4 macroblocks within I-frames. The method uses a spread-spectrum technique to obtain robustness against collusion attacks, while the JND model dynamically adjusts the embedding strength and controls the energy of the embedded fingerprints to ensure their imperceptibility. Linear and nonlinear collusion attacks are performed to show the robustness of the proposed technique while the visual quality remains unchanged.
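Additive spread-spectrum embedding and correlation detection can be sketched on a vector of transform coefficients (in the paper the strength would be set per coefficient by the JND model; the fixed `alpha` here is illustrative):

```python
import numpy as np

def ss_embed(coeffs, bit, pn, alpha=2.0):
    """Additive spread-spectrum embedding: a pseudo-noise (PN) pattern
    is added to or subtracted from the host coefficients according to
    the fingerprint bit."""
    return coeffs + alpha * (1 if bit else -1) * pn

def ss_detect(received, coeffs, pn):
    """Non-blind detection: correlate the residual against the PN
    sequence; a positive correlation recovers bit 1, negative bit 0."""
    return float(np.dot(received - coeffs, pn)) > 0.0
```

Because the fingerprint energy is spread over many coefficients, averaging-style collusion dilutes but does not cancel the correlation, which is the property the scheme relies on.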
To detect defects on the inner wall of pipelines accurately, this paper proposes a measurement system made up of a cross-structured-light projector, a single CCD camera, and a smart car. Based on structured-light measurement technology, the paper introduces the measurement system, the imaging mathematical model, and the parameters and method of camera calibration. Using these measuring principles, the camera on the remote-controlled car platform continuously images the pipeline and processes the data in real time; the established model is used to extract 3D point cloud coordinates and reconstruct pipeline defects, enabling automatic 3D measurement and verifying the correctness and feasibility of the system. The system has shown good measurement accuracy in practice.
Fluorescent lamps are becoming a popular indoor light source in many commercial sites. While high-speed video cameras can detect variations in the illumination levels of such light sources, they can be quite costly to operate. Moreover, most common security video cameras operate at a maximum of 30 frames per second. Can such a camera detect variations in the scene caused by the inconsistent illumination of fluorescent lamps? This paper examines several image quality measures for detecting flicker in video signals captured at low frame rates. Future research is outlined at the end of the paper.
This paper proposes a three-dimensional (3D) segmentation algorithm using a hyper-complex edge detection operator and applies it to 3D hepatic vessel segmentation from computed tomography (CT) volumetric data. A 3D hyper-complex edge detection operator is constructed by combining octonions with a gradient operator. Every voxel of the volumetric data is replaced by an octonion consisting of its gray level and the gray levels of its six neighbors; in this way, the original volumetric data are redefined as octonion volumetric data. As in the Sobel operator, there are three principal directions (the coordinate axes) in the 3D hyper-complex edge detection operator, and each element of the operator is an octonion. The operator is circularly convolved with the octonion volumetric data to obtain a matching response; where the response matches, the voxel lies on a vessel edge. Experimental results show that the algorithm can effectively segment small vascular tree branches.
Discriminating the kind of a flower or recognizing its name is a useful and interesting application, for example in retrieving flower databases. Since the contour of the petal region is useful for such problems, it is important to extract the precise petal region from a flower picture. In this paper, a method that extracts petal regions from a flower picture using HSV color information is proposed, with the aim of discriminating the kind of flower. Experiments show that the proposed method extracts petal regions with a success rate of about 90%, which is considered satisfactory. In detail, the success rates for one-colored, plural-colored, and white flowers are about 98%, 85%, and 83%, respectively.
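A minimal HSV-based petal mask might look like the following; the thresholds and the hue band treated as foliage are assumptions for illustration, not the paper's tuned values:

```python
import colorsys
import numpy as np

def petal_mask(rgb, s_min=0.25, v_min=0.2):
    """Mark pixels whose HSV saturation and value exceed thresholds as
    petal candidates, then reject a green hue band (assumed foliage).
    Thresholds and the hue band are illustrative assumptions."""
    h, w, _ = rgb.shape
    hsv = np.array([colorsys.rgb_to_hsv(*px)
                    for px in rgb.reshape(-1, 3) / 255.0])
    hue, sat, val = hsv[:, 0], hsv[:, 1], hsv[:, 2]
    green = (hue > 0.2) & (hue < 0.45)          # reject green foliage
    return ((sat > s_min) & (val > v_min) & ~green).reshape(h, w)
```

Saturation and value separate colored petals from dark shadow, while the hue test removes leaves; white petals, which are low-saturation, are exactly the case where such a rule degrades, consistent with the lower 83% rate reported above.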
Most general-purpose no-reference image quality assessment algorithms need prior knowledge of the anticipated distortions and their corresponding human opinion scores, and one or more distortion types may be unavailable when the model is created. In this paper, we develop a blind/no-reference, opinion-unaware, distortion-unaware image quality assessment algorithm based on natural scenes. The proposed approach extracts spatial-domain features from both natural and distorted images at two scales, where locally normalized luminance values are modeled in two forms: point-wise for single pixels and pair-wise log-derivatives for the relationship between adjacent pixels. Two sharpness functions are then applied, whose outputs form the extracted features of the proposed approach. Results show that the proposed algorithm correlates well with subjective opinion scores and outperforms the full-reference PSNR and SSIM methods. Not only do the results compete well with the recently developed NIQE model, they also outperform it.
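The locally normalized luminance (MSCN-style) and log-derivative features can be sketched as follows; the Gaussian window size, stabilizing constants, and the sharpness functions applied afterwards are assumed values, not the paper's:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(img, sigma=7 / 6):
    """Mean-subtracted, contrast-normalized coefficients: each pixel is
    normalized by a local Gaussian mean and deviation. These are the
    point-wise locally normalized luminances."""
    x = img.astype(np.float64)
    mu = gaussian_filter(x, sigma)
    sd = np.sqrt(np.maximum(gaussian_filter(x ** 2, sigma) - mu ** 2, 0.0))
    return (x - mu) / (sd + 1.0)

def log_derivative(m):
    """Pair-wise feature: difference of log-shifted coefficients of
    horizontally adjacent pixels."""
    lm = np.log(np.abs(m) + 0.1)
    return lm[:, 1:] - lm[:, :-1]
```

For pristine natural images the MSCN coefficients follow a characteristic near-Gaussian distribution; distortions deform that distribution, which is what makes these features usable without any opinion scores.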
In this paper, we propose a novel iterative active contour algorithm, i.e. Iterative Contextual CV Model (ICCV), and
apply it to automatic liver segmentation from 3D CT images. ICCV is a learning-based method and can be divided into
two stages. At the first stage, i.e. the training stage, given a set of abdominal CT training images and the corresponding
manual liver labels, our task is to construct a series of self-correcting classifiers by learning a mapping between
automatic segmentations (in each round) and manual reference segmentations via context features. At the second stage,
i.e. the segmentation stage, first the basic CV model is used to segment the image and subsequently Contextual CV
Model (CCV), which combines the image information and the current shape model, is iteratively performed to improve
the segmentation result. The current shape model is obtained by inputting the previous automatic segmentation result
into the corresponding self-correcting classifier. The proposed method is evaluated on the datasets of MICCAI 2007 liver
segmentation challenge. The experimental results show that the segmentation becomes increasingly accurate over the iterative steps, and satisfactory results are obtained after about six iterations. Our method is also comparable to the state-of-the-art work on liver segmentation.
In recent years, research on human emotion estimation using thermal infrared (IR) imagery has attracted many researchers because of its invariance to visible illumination changes. Although infrared imagery is superior to visible imagery in its invariance to illumination changes and appearance differences, it has difficulty handling eyeglasses, which are opaque in the thermal infrared spectrum. As a result, when infrared imagery is used to analyze human facial information, the eyeglass regions appear dark and the thermal information of the eyes is lost. We propose a temperature space method that corrects the effect of eyeglasses using the thermal information of neighboring facial regions, and then use Principal Component Analysis (PCA), the Eigen-space Method based on class features (EMC), and a combined PCA-EMC method to classify human emotions from the corrected thermal images. We collected the Kotani Thermal Facial Emotion (KTFE) database and performed experiments, which show an improved accuracy rate in estimating human emotions.
Reversible watermarking hides information in images for medical and military uses. The reversible watermarking scheme using distortion compensation proposed by Vasily et al. [5] embeds each pixel twice, so that the distortion caused by the first embedding is reduced or removed by the distortion introduced by the second embedding. Because their scheme is not applied in its most basic form, it is not clear whether improving it can beat the existing state-of-the-art techniques. In this paper, we first provide a novel basic distortion compensation technique that uses the same prediction method as Tian's difference expansion (DE) [2], in order to measure the effect of distortion compensation more accurately. In the second part, we analyze what kinds of improvements can be made in distortion compensation.
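Tian's difference expansion, from which the proposed technique borrows its prediction, reversibly embeds one bit per pixel pair: the difference is doubled, the bit becomes its new LSB, and the pair average is preserved so the original pixels are exactly recoverable (overflow handling is omitted in this sketch):

```python
def de_embed(x, y, b):
    """Tian's difference expansion: expand the pixel-pair difference,
    place the payload bit b in its new LSB, keep the average."""
    l = (x + y) // 2          # integer average (floor)
    h = x - y                 # difference
    h2 = 2 * h + b            # expanded difference carrying the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the bit and the original pair from the marked pair."""
    l = (x2 + y2) // 2        # average is unchanged by embedding
    h2 = x2 - y2
    b, h = h2 & 1, h2 >> 1    # LSB is the bit, halving undoes expansion
    return (l + (h + 1) // 2, l - h // 2), b
```

The embedding distortion grows with the magnitude of the expanded difference, which is exactly the quantity a distortion-compensation scheme tries to offset in the second embedding pass.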
Reversible watermarking embeds data into a cover image and extracts it from the stego image, such that the original cover image can be recovered perfectly after the data are extracted. Difference expansion (DE) and prediction error expansion (PEE) are two popular reversible watermarking methods. DE has the advantage of small distortion, while PEE offers larger embedding capacity and smaller prediction error than the pixel difference. In this paper, we propose a novel method that combines the advantages of DE and PEE, in which the difference calculated between two pixels is combined with the edge information near the pixel pair. The proposed difference calculation produces smaller pixel differences than the original simple calculation. Overlapping embedding is then used to increase the embedding capacity. Several experiments show that the proposed method gives excellent results.
This paper presents a novel low-rank and sparse decomposition (LSD) based model for anomaly detection in hyperspectral images. In our model, a local image region is represented in the spectral space as a low-rank matrix plus sparse noise, where the background is explained by the low-rank matrix and the anomalies are indicated by the sparse noise. The detection of anomalies in local image regions is formulated as a constrained LSD problem, which can be solved efficiently and robustly with a modified “Go Decomposition” (GoDec) method. To enhance the validity of this model, we adapt the “simple linear iterative clustering” (SLIC) superpixel algorithm to efficiently generate homogeneous local image regions (i.e., superpixels) in hyperspectral imagery, ensuring that the background in each local region satisfies the low-rank condition. Experimental results on real hyperspectral data demonstrate that, compared with several known local detectors, including the RX, kernel RX, and SVDD detectors, the proposed model achieves better performance in satisfactory computation time.
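The GoDec-style alternation behind the decomposition can be sketched as follows: alternately fit a rank-r matrix by truncated SVD and assign the largest-magnitude residual entries to the sparse part. This is a minimal sketch of the standard scheme, not the paper's modified GoDec, and the SLIC superpixel step is omitted.

```python
import numpy as np

def godec(X, rank, card, iters=20):
    """Split X into low-rank L (background) plus sparse S (anomalies)."""
    S = np.zeros_like(X)
    for _ in range(iters):
        # low-rank update: truncated SVD of the sparsity-corrected data
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # sparse update: keep the 'card' largest-magnitude residual entries
        R = X - L
        thresh = np.sort(np.abs(R), axis=None)[-card]
        S = np.where(np.abs(R) >= thresh, R, 0.0)
    return L, S
```

On a region whose background is genuinely low-rank, the few pixels that cannot be explained by the rank-r fit end up in S, which is exactly how anomalies are flagged.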
This paper proposes a new curve smoothing method that is invariant to affine transformation. Curve smoothing is an important challenge in computer vision as a procedure for noise suppression in shape analysis, for example in Curvature Scale Space (CSS). Among the many smoothing methods, Gaussian filtering is the most widely used; however, it is not affine invariant. The proposed method is invariant under affine transformations that preserve the area of any region in the image. Specifically, we introduce an affine-invariant evaluation function with a metric tensor, and the original curve is smoothed by minimizing this evaluation function. We mathematically prove that the method is affine invariant. Further, experimental results show that, unlike conventional Gaussian filtering, the proposed method is almost unaffected by affine transformation, so its results are expected to be robust to changes of viewpoint.
We develop a marker-less augmented reality (AR) environment with a touch interface using the Kinect device, and build a touch painting game on top of it. The environment is similar to a touch-screen interface that allows the user to paint a picture on a tabletop with his or her fingers; it is constructed from the depth image provided by a Kinect mounted above the tabletop. We incorporate a support vector machine (SVM) to classify the painted pictures, match them to the stored data, and render the corresponding AR content onto the tabletop using the Kinect's color images. Because users can control the AR through this touch-like interface, we achieve a marker-less, interactive AR environment.
This paper presents a robust color image watermarking algorithm that embeds a grayscale image into a color image using the higher-order singular value decomposition (HOSVD). We treat the color image in the RGB color space as a tensor rather than as three independent channels. The color image is partitioned into non-overlapping patches (sub-tensors), and their HOSVDs are computed. Moreover, a subtle preprocessing step, a block Arnold transform, is designed to improve robustness against cropping attacks. Experimental results show that the proposed algorithm keeps the watermark effectively invisible and is robust against a wide variety of geometric and non-geometric attacks.
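The HOSVD of a small color patch can be computed by taking the left singular vectors of each mode unfolding and projecting the tensor onto them. This is a minimal sketch of the decomposition itself; the watermark embedding rule and the block Arnold transform from the paper are not shown.

```python
import numpy as np

def unfold(T, mode):
    """Mode-m unfolding: mode m becomes the rows of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    """Core tensor and per-mode factor matrices of a dense tensor T."""
    # factor matrices: left singular vectors of each mode unfolding
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0]
         for m in range(T.ndim)]
    # core tensor: multiply T by U_m^T along every mode
    S = T
    for m, Um in enumerate(U):
        S = np.moveaxis(np.tensordot(Um.T, S, axes=(1, m)), 0, m)
    return S, U

def reconstruct(S, U):
    """Invert hosvd: multiply the core by U_m along every mode."""
    T = S
    for m, Um in enumerate(U):
        T = np.moveaxis(np.tensordot(Um, T, axes=(1, m)), 0, m)
    return T
```

For a full-size HOSVD the factor matrices are orthogonal, so the patch is reconstructed exactly; a watermarking scheme would typically perturb entries of the core tensor S before reconstruction.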
Short-term forecasting of cloud distribution within a sequence of all-sky images is an important issue in meteorology. In this work, a cloud image forecasting system is designed, which includes three steps: cloud detection, cloud matching, and motion estimation. We treat cloud detection as a classification problem based on Linear Discriminant Analysis. During matching, a set of Speeded-Up Robust Features (SURF) is extracted to represent each cloud, and clouds are matched by computing correspondences between SURF features. Finally, an affine transform is applied to estimate the motion of the cloud. This local-feature-based method is capable of predicting the rotation and scaling of clouds, whereas the traditional method is limited to translational motion. Objective evaluation shows higher accuracy for the proposed method compared with several other algorithms.
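The final motion-estimation step reduces to fitting a 2-D affine model to the matched feature coordinates, which can be done by linear least squares. This is a minimal sketch of that step under the assumption of clean correspondences; the SURF extraction and matching themselves are not shown.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine A such that dst ~ [x, y, 1] @ A.T,
    given matched (N, 2) point arrays src and dst."""
    ones = np.ones((len(src), 1))
    X = np.hstack([src, ones])                 # (N, 3) homogeneous sources
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T                                 # (2, 3) affine matrix
```

Because the model is affine rather than purely translational, the recovered matrix captures rotation and scaling of the cloud as well as its drift, which is the advantage claimed in the abstract.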
This paper presents a Belief Propagation (BP) stereo matching algorithm using ground control points. The proposed algorithm combines local and global stereo methods: a local stereo method is first used to obtain an initial disparity map, then ground control points are selected from the initial disparities and used in the belief propagation algorithm for global stereo matching. By using ground control points, the proposed algorithm improves the convergence speed of BP. Moreover, this paper proposes a color-constraint voting method to optimize the disparities in post-processing. Experimental results show that the proposed algorithm has low computational complexity yet high matching accuracy.
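The abstract does not specify how reliable pixels are selected; a common heuristic for choosing ground control points from an initial disparity map is a left-right consistency check, sketched here as an assumption rather than the paper's actual rule.

```python
import numpy as np

def ground_control_points(disp_left, disp_right, tol=1):
    """Mark pixels whose left and right integer disparities agree within tol
    (a common GCP heuristic; the paper's selection rule may differ)."""
    h, w = disp_left.shape
    gcp = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            d = disp_left[y, x]
            xr = x - d                       # matching column in the right image
            if 0 <= xr < w and abs(disp_right[y, xr] - d) <= tol:
                gcp[y, x] = True
    return gcp
```

Pixels passing the check are treated as trustworthy anchors, which is what lets BP start from strong evidence and converge faster.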
In this paper, we propose a new algorithm for simplifying line drawing sketches. First, we segment the strokes at points of large curvature if desired. Then we apply a low-pass filter and use the result to assign a weight to every stroke; strokes are moved toward the positions of higher weight. After that, we find stroke pairs and combine them to reduce the total number of strokes, resulting in a cleaner line drawing. The system also removes disordered, confusing small strokes and merges them into long strokes.
This paper studies a three-dimensional non-contact system for measuring the attitude and deformation of a wind tunnel model. By projecting a single laser speckle pattern onto the measured three-dimensional space, a random optical encoding is obtained. Correlation search matching of the collected images is conducted in MATLAB, the results are compared with measurements from existing PIV equipment, and a binocular measurement system is employed to obtain the three-dimensional coordinates of the model. As a result, the feasibility of the system is verified and the principle of 3D non-contact coding and recognition is demonstrated.
Appearance modeling is an important yet challenging issue for online visual tracking, because errors accumulated during self-updating with newly obtained results can cause drifting. In this paper, we propose a novel online tracking algorithm using spatio-temporal cue integration. Specifically, for the spatial cue, the object is represented as a set of local patches. For the temporal cue, we keep appearance models from different times and update them alternately. By taking full advantage of both the historical and the current information about the tracked object, the drift problem is alleviated. We also develop an effective cue quality measurement that combines similarity and motion information. Both qualitative and quantitative evaluations on challenging video sequences demonstrate that the proposed algorithm performs comparably to state-of-the-art methods.
Demosaicing of the color filter array is one of the most important parts of the image processing pipeline of single-sensor digital cameras. In recent years, one of the most successful algorithms has been the multiscale gradients (MSG) algorithm. In this paper, several modifications are made to the MSG algorithm that significantly reduce its computational complexity while maintaining image quality.
Embedded block coding with optimized truncation (EBCOT) is a key algorithm in the JPEG 2000 image compression system. Recent bit-plane coder architectures can produce symbols at a higher rate than existing MQ arithmetic coders can consume. To solve this problem, a multiple-symbol processor design for a statistical MQ coder architecture on FPGA is proposed. The proposed architecture retains the simplicity of a single-symbol architecture while integrating several techniques that increase the coding rate (more than one symbol per clock) and reduce the critical path, thereby accelerating the coding speed. The statistics of repeated symbols were analyzed prior to the design using a lookahead technique. This allows the architecture to support an encoding rate of up to 8 symbols per clock cycle without stalls and without excessively increasing the hardware cost, accelerating the encoding process and greatly increasing throughput. Experiments show that, for the lossy wavelet transform, the proposed architecture offers a throughput of at least 233.07 MCxD/s while reducing the number of clock cycles by more than 35.51%.
In image processing and computational photography, automatic image enhancement is one of the long-standing objectives. Recent automatic enhancement methods take into account not only global semantics, such as correcting color hue and brightness imbalances, but also the local content of the image, such as human faces or the sky in a landscape. In this paper we describe a new scheme for automatic image enhancement that considers both the global semantics and the local content of the image. Our method employs a multi-scale edge-aware image decomposition to detect underexposed regions and enhance the detail of the salient content. Experimental results demonstrate the effectiveness of our approach compared to existing automatic enhancement methods.
Signature verification holds a significant place in today's world, as most bank transactions, stock trades, etc., are validated via signatures. Signatures are considered one of the most effective biometric identifiers, but unfortunately signature forgery attempts are quite rampant. To prevent this, a robust signature verification mechanism is essential. In this paper, a new method is proposed that uses Local Binary Patterns and geometrical features, including a newly devised geometric property, the Octave Pattern. Performance is analyzed by comparing random, semi-skilled, and skilled forgeries with genuine signatures.
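The texture half of the feature set, the standard 3x3 local binary pattern, can be sketched directly: each interior pixel gets an 8-bit code from thresholding its neighbours against it. The Octave Pattern devised in the paper is not reproduced here.

```python
import numpy as np

def lbp_image(img):
    """8-neighbour LBP codes for the interior pixels of a grayscale image."""
    img = np.asarray(img)
    c = img[1:-1, 1:-1]                        # centre pixels
    # clockwise neighbour offsets starting at the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit   # set bit if neighbour >= centre
    return code
```

A histogram of these codes over the signature image is the usual LBP texture descriptor fed to a classifier.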
In a previous study, we examined color spaces for reading resistor lines of 11 different colors (black, brown, red, orange, yellow, green, blue, purple, gray, white, and gold). However, that color classification experiment was carried out under only one illumination. To classify real resistor color lines effectively, classification under various illumination conditions must be considered. In this paper, we examine 10 color features (RGB, XYZ, YCbCr, YIQ, HSI, HSV, HLS, L*u*v*, L*a*b*, and I1I2I3) under various illuminations. The experimental results show the effectiveness of the L*u*v* color feature. Furthermore, with very small training sample sizes, the u*v* feature vector, which excludes the intensity component L*, outperforms the full L*u*v* feature.
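The L*u*v* feature singled out above can be computed from RGB via CIE XYZ. The sketch below assumes linear RGB with sRGB primaries and a D65 white point; the paper's exact conversion pipeline may differ.

```python
import numpy as np

M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])     # linear RGB -> XYZ (sRGB/D65)
Xn, Yn, Zn = M @ [1.0, 1.0, 1.0]             # reference white

def rgb_to_luv(rgb):
    """CIE 1976 L*u*v* of a linear RGB triple in [0, 1]."""
    X, Y, Z = M @ np.asarray(rgb, float)
    t = Y / Yn
    # CIE lightness with the linear segment near black
    L = 116 * t ** (1 / 3) - 16 if t > (6 / 29) ** 3 else (29 / 3) ** 3 * t
    def uv(X, Y, Z):
        d = X + 15 * Y + 3 * Z
        return (4 * X / d, 9 * Y / d) if d > 0 else (0.0, 0.0)
    up, vp = uv(X, Y, Z)
    unp, vnp = uv(Xn, Yn, Zn)
    return L, 13 * L * (up - unp), 13 * L * (vp - vnp)
```

Dropping the first component of the returned triple gives exactly the u*v* vector that the paper found to work best with small training sets.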
Chaos theory has been used in cryptographic applications to generate sequences of data that are close to pseudorandom, based on an adjusted initial condition and a parameter. However, data recovery becomes a crucial problem due to the precision of the parameters. This difficulty limits the use of chaos-based cryptography, especially for error-sensitive applications such as voice cryptography. To enhance encryption security and overcome this limitation, Adaptive Pixel-Selection using Chaotic Map Lattices (APCML) is proposed. In APCML, the encryption sequence is adaptively selected based on a chaos generator. Moreover, the chaotic transformation and the normalization boundary are revised to alleviate the rounding error and the problem of an inappropriate normalization boundary. In the experiments, measurement indices of originality preservation, visual inspection, and statistical analysis are used to evaluate the performance of the proposed APCML against the original CML. The APCML algorithm offers greater performance, with full recovery of the original message.
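The basic idea of chaos-based encryption can be sketched with a single logistic map driving an XOR keystream. This toy example only illustrates the keystream principle; the paper's APCML uses coupled map lattices with adaptive pixel selection and is considerably more elaborate, and the chosen seed and parameter below are arbitrary.

```python
# Toy chaos-based stream cipher: the logistic map x <- r*x*(1-x) generates
# a byte keystream, and XOR makes encryption and decryption identical.

def logistic_keystream(x0, r, n):
    """Generate n key bytes from the logistic map."""
    ks, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        ks.append(int(x * 256) % 256)   # quantise the chaotic state to a byte
    return ks

def xor_cipher(data, x0=0.3579, r=3.99):
    """XOR with the keystream; applying it twice recovers the input."""
    ks = logistic_keystream(x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))
```

The precision issue the abstract mentions is visible here: any rounding difference in x0 or r between sender and receiver desynchronises the keystream completely, which is what APCML's revised transformation and normalization boundary aim to mitigate.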
In this paper, we propose a robust and fast line segment detector that achieves accurate results with a controlled number of false detections and requires no parameter tuning. It consists of three steps: first, we propose a novel edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input image; second, we propose a top-down scheme based on smaller-eigenvalue analysis to extract line segments within each obtained edge segment; third, we employ the method of Desolneux et al. to reject false detections. Experiments demonstrate that the detector is very efficient and more robust than two state-of-the-art methods, LSD and EDLines.
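The smaller-eigenvalue test in the second step can be sketched directly: for a chain of edge points lying on a line, the smaller eigenvalue of the 2x2 scatter matrix is near zero, and the corresponding eigenvector is the line normal. This is a minimal sketch of the test itself, not the paper's full top-down splitting scheme.

```python
import numpy as np

def line_fit_eigen(points):
    """Return (smaller eigenvalue, line normal) of the point scatter matrix.
    A near-zero eigenvalue indicates the points form a line segment."""
    pts = np.asarray(points, float)
    centred = pts - pts.mean(axis=0)
    cov = centred.T @ centred / len(pts)   # 2x2 scatter (covariance) matrix
    w, V = np.linalg.eigh(cov)             # eigenvalues in ascending order
    return w[0], V[:, 0]
```

Thresholding the smaller eigenvalue (or the RMS point-to-line distance derived from it) decides whether a chain is accepted as a segment or split further.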
This paper proposes a novel face super-resolution reconstruction (hallucination) technique for the YCbCr color space. The underlying idea is to learn an error regression model together with multi-linear principal component analysis (MPCA). In this hallucination framework, color face images are represented in YCbCr space, and to reduce the time complexity of color face hallucination they are naturally described as tensors (multi-linear arrays). Error regression analysis is used to obtain an error estimate from the existing low-resolution image in tensor space. In the learning process, the errors made when reconstructing the training face images with MPCA are collected, and regression analysis finds the relationship between the input and the error. The hallucination process uses standard MPCA back-projection, after which the result is corrected with the error estimate. We show that our hallucination technique is suitable for color face images in both RGB and YCbCr space. Using the MPCA subspace with the error regression model, we can generate photorealistic color face images. Our approach is demonstrated by extensive experiments with high-quality hallucinated color faces, and comparison with existing algorithms shows the effectiveness of the proposed method.
Image matching plays an important role in a variety of applications. To match images, features are needed that describe the local content of the images. In this paper, we propose novel features (vector features) for image matching and image registration. The vector features are based on the intensity differences of the pixels around interest points and characterize the spatial distribution of gray values in an effective and efficient way. In the experiments, the invariance of the vector features to image rotation is verified, and the features perform well on images of different focal lengths and on multi-view images. An experiment on image registration also demonstrates good results.
With the rapid development of information techniques, data mining has become one of the most important tools for discovering deep associations among tuples in large-scale databases. Protecting private information, especially during the data mining procedure, is therefore a major challenge. In this paper, a new method for privacy protection based on fuzzy theory is proposed. Traditional fuzzy approaches in this area apply fuzzification to the data without considering its readability. We introduce a new style of obscured data expression that provides more details of the subsets without reducing readability, and we adopt an approach that balances the privacy level against utility when deriving suitable subgroups. An experiment shows that this approach supports classification without lowering accuracy. In the future, thanks to the low computational complexity of the fuzzy function, the approach can be adapted to data streams with suitable modification.
Mammography is currently the best way to detect breast cancer early. A mass is a typical sign of breast cancer, and classifying masses as malignant or benign may assist radiologists in reducing the biopsy rate without increasing false negatives. Typically, various geometry and texture features are extracted and used to train a classifier. However, not every feature is equally important for a classifier, and some features may in fact decrease its performance. In this paper, we investigate the use of a semi-supervised feature selection method for classification. After a mass is extracted from a region of interest (ROI) with a level set method, morphological and texture features are extracted from the segmented region and its surroundings. SSLFE (Semi-Supervised Local Feature Extraction, proposed in our previous work) is utilized to select important features for a KNN classifier. Mammography images from DDSM were used for the experiments. The results show that, by incorporating the information embedded in unlabeled data, SSLFE improves performance compared to both no feature selection and the traditional Relief method.
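The Relief baseline mentioned above can be sketched in a few lines: for each sample, the weight of a feature grows when it separates the sample from its nearest other-class neighbour ("miss") and shrinks when it differs from the nearest same-class neighbour ("hit"). This is a minimal sketch of classic binary Relief; SSLFE itself is not reproduced here.

```python
import numpy as np

def relief(X, y):
    """Classic Relief feature weights for a two-class dataset X (n, d), y (n,)."""
    X = np.asarray(X, float)
    y = np.asarray(y)
    n, d = X.shape
    w = np.zeros(d)
    for i in range(n):
        dists = np.abs(X - X[i]).sum(axis=1)   # L1 distances to sample i
        dists[i] = np.inf                      # exclude the sample itself
        same = (y == y[i])
        hit = np.argmin(np.where(same, dists, np.inf))
        miss = np.argmin(np.where(~same, dists, np.inf))
        # reward features that differ across classes, penalise within-class spread
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n
```

Features with the largest weights are kept; the paper's point is that exploiting unlabeled data (as SSLFE does) can select better features than this purely supervised scoring.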
Applying structure element (SE)-based morphology to detect the edges of an image corrupted by noise may produce irregular shapes, decreasing the precision of object identification. Recently, multi-SE morphology was introduced to detect the edges of gray-scale images, yielding more complete and continuous results than a single structure element. For color images, the robust color morphology gradient (RCMG) edge detector, extended from the CMG, was proposed; it uses a square SE together with an outlier-removal technique to address the noise-sensitivity problem. In this paper, to address noise in color images, we propose an efficient color edge-detection algorithm, ECMSEM (Efficient Color Multi-SE Morphology), which combines the advantages of multi-SE morphology and (noise) outlier removal. In our ECMSEM approach, color multi-SE morphology with a synthetic weighting method is introduced to detect only edges (not noise), and multi-scale SEs increase the precision in each SE direction, thus improving the edge-detection results. In the performance evaluation, the edge-detection results on synthetic color test images showed that ECMSEM yields a higher figure of merit (FOM) than the RCMG approach.
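The morphological gradient underlying all of these detectors can be sketched as dilation minus erosion with a single 3x3 structuring element. This is the basic single-SE operation only; the multi-SE, multi-scale ECMSEM scheme and its outlier rejection go well beyond this sketch.

```python
import numpy as np

def morph_gradient(img):
    """Dilation minus erosion over a 3x3 structuring element (edge-padded)."""
    img = np.asarray(img, float)
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    # stack the 3x3 neighbourhood of every pixel
    nb = np.stack([p[dy:dy + h, dx:dx + w]
                   for dy in range(3) for dx in range(3)])
    return nb.max(axis=0) - nb.min(axis=0)   # dilation - erosion
```

On a clean step edge this gradient is nonzero only along the transition; a single noisy pixel, however, raises it in a whole 3x3 neighbourhood, which is exactly the noise sensitivity that multi-SE schemes and outlier removal try to suppress.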
In this paper, we propose a two-step algorithm that combines an exemplar-based algorithm with an illumination model to deal with specular images, especially those containing saturated pixels in highlight areas. First, the proposed modified exemplar-based algorithm processes the unsaturated specular pixels under the supervision of the illumination model. Then we inpaint the remaining regions, in which the pixels are saturated, with the original exemplar-based algorithm to obtain the final result. Experimental results demonstrate that the proposed algorithm performs better on images with saturated highlight pixels than classical highlight removal and image inpainting algorithms.
A real-time orientation feature descriptor for portable devices is introduced. The descriptor requires very low computational resources and, at 16 dimensions, is shorter than existing methods. The patch of a candidate feature is first segmented into polar-arranged sub-regions, which lets us achieve rotation invariance rapidly, and the principal orientation is then used to describe each sub-region. The computations are considerably accelerated by using an integral image. The descriptor is applied to object tracking and achieves a 25 fps frame rate on a mobile phone. Experimental results demonstrate that the proposed method offers sufficient matching performance.
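The integral-image trick that makes the sub-region computations cheap can be sketched in two functions: after one cumulative-sum pass, the sum over any rectangle costs four lookups regardless of its size.

```python
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of img[:y, :x], with a zero first row and column."""
    return np.pad(np.cumsum(np.cumsum(img, axis=0), axis=1), ((1, 0), (1, 0)))

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in O(1) via four corner lookups."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
```

This constant-time box sum is what keeps the per-feature cost low enough for 25 fps tracking on a mobile CPU.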
Following the requirements of the RoboCup Standard Platform League (NAO) Rule Book, we conducted experiments to complete the League's tasks with the humanoid robot NAO. Images captured by a moving camera exhibit distortion caused by the so-called rolling shutter, which means that not all scanlines are exposed over the same time interval. This paper presents a model of the rolling shutter and the principle of image capture, then describes corrections to the image distortion found on the NAO robots. Finally, cases of reversing the effects of the rolling shutter on images taken by the robot's moving camera are presented. This approach improves the effectiveness of shape recognition and the accuracy of the measured tilt of field lines.
Mountain torrent disasters are serious in the Liaohe river basin. This paper describes a flood loss assessment system that can help reduce losses from geologic disasters. First, it analyzes four sources of uncertainty in flood loss: the precision of prediction and simulation, the accounting standard for assets, asset vulnerability, and flood prevention ability. Second, EasyDHM is selected for flood forecasting, with a simulation lead time of 6 h before the real flood peak arrives; a flood inundation model is used to extract submerged-level information and produce flood submergence maps. Finally, a flood loss calculation model computes the loss from the information obtained by overlaying socio-economic data and water data. The feasibility of the flood loss assessment system is demonstrated by simulating the floods of 1998, 2003, and 2005. Although the unified assessment criteria introduce some differences in the results, the system shows value in system integration.
This paper presents a supervised smoke detection method that uses local and global features. The framework integrates and extends notions from many previous works into a new comprehensive method. First, chrominance detection is used to screen areas suspected to be smoke. For these areas, local features are then extracted, among them the homogeneity of the gray-level co-occurrence matrix (GLCM) and wavelet energy. Next, a global feature, the motion of the smoke-colored areas, is extracted using a space-time analysis scheme. Finally, these features are used to train an artificial intelligence model; here we use a neural network. An experiment compares the importance of each feature, so we can determine which of the features used by previous works are actually useful. The proposed method outperforms many current methods in terms of correctness, and does so in reasonable computation time; it also has fewer limitations than conventional smoke sensors when used in open space. As expected, the best results are achieved when all of the mentioned features are used, yielding a high true-positive rate and a low false-positive rate and showing that our algorithm is robust for smoke detection.
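The GLCM homogeneity feature used as a local texture cue can be sketched for a single pixel offset on a quantised image. The definition below follows the common form sum P(i,j) / (1 + (i - j)^2); the paper may normalise or choose offsets differently.

```python
import numpy as np

def glcm_homogeneity(img, dy=0, dx=1, levels=256):
    """GLCM homogeneity for one (dy, dx) offset on an integer-valued image."""
    img = np.asarray(img)
    h, w = img.shape
    # co-occurrence counts for all pixel pairs at the given offset
    P = np.zeros((levels, levels))
    a = img[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = img[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    np.add.at(P, (a.ravel(), b.ravel()), 1)
    P /= P.sum()                               # normalise to joint probabilities
    i, j = np.indices(P.shape)
    return float((P / (1.0 + (i - j) ** 2)).sum())
```

Homogeneity is 1 for perfectly uniform regions and drops as neighbouring gray levels diverge, which is why it helps separate diffuse smoke texture from sharper background structure.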
In this study, a Computer-Aided Microscopy (CAM) system is proposed for investigating the importance of the histological criteria involved in diagnosing cancer under the microscope, in order to suggest the most informative features for discriminating low-grade from high-grade brain tumours. Four families of criteria were examined, involving grey-level variations (i.e. texture), morphology (i.e. roundness), architecture (i.e. cellularity), and overall tumour qualities (the expert's ordinal scale). The proposed CAM system was constructed using a modified Seeded Region Growing algorithm for image segmentation and a Probabilistic Neural Network classifier for image classification. The implementation was designed on a commercial Graphics Processing Unit card using parallel programming. The system's performance using textural, morphological, architectural, and ordinal information was 90.8%, 87.0%, 81.2%, and 88.9%, respectively. Results indicate that nuclei texture is the most important family of features regarding the degree of malignancy and may thus guide more accurate predictions for discriminating low-grade from high-grade gliomas. Considering that nuclei texture is almost impossible to encode by visual observation, the need to incorporate computer-aided diagnostic tools as a second opinion in the daily clinical practice of diagnosing rare brain tumours may be justified.
Investigating an efficient method for plant propagation can help not only prevent extinction of plants but also
facilitate the development of botanical industries. In this paper, we propose to use image processing techniques to
determine the cutting line for the propagation of two kinds of plants, i.e. Melaleuca alternifolia Cheel and Cinnamomum kanehirai Hay, which have quite different characteristics in terms of shape, structure, and propagation method (by seeding and by rooting, respectively). The proposed cutting-line determination methods can be further applied to develop an automatic control system to reduce labor cost and increase the effectiveness of plant propagation.
In this paper, we propose a novel image retrieval algorithm using local spatial binary patterns (LSBP) for content-based image retrieval. The traditional local binary pattern (LBP) encodes the relationship between the referenced pixel and its surrounding neighbors by calculating gray-level differences, but it lacks the spatial distribution information of texture direction. The proposed method encodes the spatial relationship between the referenced pixel and its neighbors based on the gray-level variation patterns of the horizontal, vertical, and oblique directions. Additionally, the variation between the center pixel and its surrounding neighbors is calculated to reflect the magnitude information of the whole image. We compare our method with LBP, uniform LBP (ULBP), completed LBP (CLBP), local ternary patterns (LTP), and local tetra patterns (LTrP) on three benchmark image databases: the Brodatz texture database (DB1), the Corel database (DB2), and the MIT VisTex database (DB3). Experimental analysis shows that, compared with the traditional LBP, the proposed method improves the retrieval results from 70.49%/41.30% to 73.26%/46.26% in terms of average precision/average recall on DB2, and from 79.02% to 85.92% and from 82.14% to 90.88% in terms of average precision on DB1 and DB3, respectively.
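The traditional LBP that the proposed LSBP extends can be sketched as follows (a minimal implementation of the classic 8-neighbour operator; the clockwise bit ordering is one common convention, not necessarily the paper's):

```python
def lbp_code(img, y, x):
    """Classic 8-neighbour LBP: threshold each neighbour against the
    centre pixel and pack the results into an 8-bit code."""
    c = img[y][x]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= c:
            code |= 1 << bit
    return code

patch = [[10, 20, 30],
         [40, 50, 60],
         [70, 80, 90]]
print(lbp_code(patch, 1, 1))  # 120
```

A retrieval system then histograms these codes over the image; the LSBP described above augments this with directional spatial information that the plain code discards.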
Pleural thickenings can be found in the lungs of asbestos-exposed patients. Non-invasive diagnosis, including CT imaging, can detect aggressive malignant pleural mesothelioma in its early stage. In order to create quantitative documentation of automatically detected pleural thickenings over time, the differences in volume and thickness of the detected thickenings have to be calculated. Physicians usually estimate the change of each thickening via visual comparison, which provides neither quantitative nor qualitative measures. In this work, automatic spatiotemporal matching techniques for the detected pleural thickenings at two points in time, based on semi-automatic registration, have been developed, implemented, and tested so that the same thickening can be compared fully automatically. As a result, the mapping technique using principal component analysis turns out to be more advantageous than the feature-based mapping using the centroid and mean Hounsfield units of each thickening, since the sensitivity improved from 42.19% to 98.46%, while the accuracy of the feature-based mapping is only slightly higher (84.38% versus 76.19%).
In this paper, an approach based on particle filtering for automatically extracting power lines from aerial images is presented. We integrate the grey-value similarity of power lines into the particle filter to track points on the power lines, and use the extracted points to fit each power line as a parabola. Moreover, a fully automatic initialization strategy is used. Experimental results show that the proposed approach is a promising and fully automatic method for extracting power lines from very complex backgrounds. This algorithm will play an important role in the exact 3D reconstruction of power lines, which can help power grid companies ensure the safety of their lines.
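Fitting the extracted points to a parabola, as described above, is a standard least-squares problem. A hedged sketch (synthetic points, not the paper's data or code) is:

```python
import numpy as np

def fit_parabola(xs, ys):
    """Least-squares fit of y = a*x^2 + b*x + c to tracked line points."""
    A = np.column_stack([np.asarray(xs) ** 2, xs, np.ones(len(xs))])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(ys), rcond=None)
    return coeffs  # (a, b, c)

# Synthetic "tracked points" on y = 0.5*x^2 - 2*x + 3, no noise.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.5 * x * x - 2 * x + 3 for x in xs]
a, b, c = fit_parabola(xs, ys)
print(round(float(a), 6), round(float(b), 6), round(float(c), 6))
```

With noisy tracked points from the particle filter, the same least-squares fit averages out the tracking jitter along the line.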
Edge-preserving smoothing is crucial for image decomposition to extract the base layer. However, current methods fail to smooth high-contrast details or to preserve thin edges because they use a single criterion to distinguish edges from details. In this paper, we present a hybrid definition of salient edges using two properties: intensity amplitude and oscillation density. Based on this definition, we propose an edge-preserving image smoothing algorithm. First, the local extrema of the input image are located. These extremum points are then classified as edge or detail points according to the two properties. Third, max and min envelopes are obtained by an optimization process with the edge points as constraints. Lastly, the smoothing result is obtained by an averaging operation. Experimental results show that the proposed method can preserve salient step edges while smoothing high-contrast details, and that it is useful in many applications such as image enhancement and tone mapping.
In this paper, we present a fully automatic facial expression recognition system using support vector machines, with geometric features extracted from the tracking of facial landmarks. Facial landmark initialization and tracking are performed using an elastic bunch graph matching algorithm. Facial expression recognition is then performed based on features extracted from the tracking of not only individual landmarks but also pairs of landmarks. The recognition accuracy on the Extended Cohn-Kanade (CK+) database shows that our proposed set of features produces better results, because it utilizes time-varying graph information as well as the motion of individual facial landmarks.
AdaBoost is a machine learning technique that integrates many weak classifiers into one strong classifier to enhance classification performance. Gentle AdaBoost is a variant of AdaBoost that introduces Newton steps into the boosting process. It has been shown that the overall performance of Gentle AdaBoost, considering both training error and generalization error, is better than that of other AdaBoost variants on low-noise data. However, it suffers from overfitting when the training data contain high noise. To solve this problem, we propose a new approach that limits weight distortion according to a stretched distribution of the whole set of sample weights. Experimental results show that our algorithm achieves better generalization error on both standard and noisy datasets. Moreover, our method does not increase computation time compared with Gentle AdaBoost.
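The paper's stretched-distribution rule is not reproduced here, but the general idea of limiting sample-weight distortion in boosting (noisy samples accumulating runaway weight) can be illustrated with a simple, hypothetical capping scheme:

```python
def limit_weights(weights, cap_ratio=3.0):
    """Cap each boosting sample weight at cap_ratio times the mean weight,
    then renormalize to sum to 1 -- a generic guard against noisy samples
    dominating. (Illustrative only; the paper's stretched-distribution
    rule is different and more refined.)"""
    mean = sum(weights) / len(weights)
    clipped = [min(w, cap_ratio * mean) for w in weights]
    total = sum(clipped)
    return [w / total for w in clipped]

# One noisy sample has grabbed 91% of the total weight.
w = limit_weights([0.91] + [0.01] * 9)
print(max(w))  # reduced well below 0.91
```

Any such limit trades a little training-error reduction for robustness, which is exactly the overfitting trade-off the abstract describes.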
The advances of computer games have shown their potential for developing edutainment content and services. Cultural heritage institutions often make use of games to complement existing presentations and to create a memorable exhibition. Games offer opportunities to reorganize and conceptualize historical, cultural, and technological information about the exhibits. To demonstrate the benefits of serious games in facilitating learning activities in a constructive and meaningful way, we designed a video game about the Heerlen bathhouse heritage. This paper explains the design considerations of this Roman bathhouse game, with a particular focus on the link between game play and learning.
In order to improve the accuracy of geometrical defect detection, this paper presents a method based on Hu moment invariants of skeleton images. The method has four steps: first, grayscale images of non-silicon MEMS parts are collected and converted into binary images; second, the skeletons of the binary images are extracted using the medial-axis transform; third, the Hu moment invariants of the skeleton images are calculated; finally, the differences in Hu moment invariants between measured parts and qualified parts are used to determine whether geometrical defects are present. To demonstrate the usefulness of this method, experiments were carried out on both skeleton images and grayscale images. The results show that, for the same defects of a non-silicon MEMS part, the Hu moment invariants of skeleton images are more sensitive than those of grayscale images, and the detection accuracy is higher. Therefore, this method can more accurately determine whether non-silicon MEMS parts are qualified, and it can be applied to non-silicon MEMS part inspection systems.
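Hu moment invariants are built from normalized central moments. A minimal sketch of the first invariant (phi1 = eta20 + eta02) on a binary image, shown here only to illustrate its translation and scale invariance, is:

```python
def hu_phi1(img):
    """First Hu moment invariant (eta20 + eta02) of a binary image.
    Invariant to translation and (up to discretization) to scale."""
    pts = [(x, y) for y, row in enumerate(img)
                  for x, v in enumerate(row) if v]
    m00 = len(pts)                      # zeroth moment = area
    cx = sum(x for x, _ in pts) / m00   # centroid
    cy = sum(y for _, y in pts) / m00
    mu20 = sum((x - cx) ** 2 for x, _ in pts)
    mu02 = sum((y - cy) ** 2 for _, y in pts)
    # Normalized central moment: eta_pq = mu_pq / m00**((p+q)/2 + 1)
    return (mu20 + mu02) / m00 ** 2

square = [[1, 1], [1, 1]]
big_square = [[1] * 4 for _ in range(4)]
print(abs(hu_phi1(square) - hu_phi1(big_square)) < 0.2)  # True
```

Comparing such invariants between a measured part's skeleton and a qualified part's skeleton, as the method above does, yields a defect signal that is independent of part position and scale.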
Normal estimation is an essential step in point-cloud-based geometric processing, such as high-quality point-based rendering and surface reconstruction. In this paper, we present a clustering-based method for normal estimation that preserves sharp features. For a piecewise-smooth point cloud, the k-nearest neighbors of a point lie on a union of multiple subspaces. Given the PCA normals as input, we perform a subspace clustering algorithm to segment these subspaces. Normals are then estimated from the points lying in the same subspace as the center point. In contrast to previous methods, we exploit the low-rankness of the input data by seeking the lowest-rank representation among all candidates that can represent one normal as a linear combination of the others. The integration of Low-Rank Representation (LRR) makes our method robust to noise. Moreover, our method can simultaneously produce the estimated normals and the local structures, which are especially useful for denoising and segmentation applications. Experimental results show that our approach successfully recovers sharp features and generates more reliable results compared with the state-of-the-art.
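The PCA normals used as input above are conventionally obtained from the eigenvector of the neighbourhood covariance with the smallest eigenvalue. A minimal sketch of that baseline (not the authors' LRR refinement) is:

```python
import numpy as np

def pca_normal(neighbors):
    """Estimate a surface normal as the eigenvector of the neighbourhood
    covariance matrix with the smallest eigenvalue (standard PCA normal)."""
    P = np.asarray(neighbors, dtype=float)
    P -= P.mean(axis=0)                  # center the neighbourhood
    cov = P.T @ P
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
    return eigvecs[:, 0]                 # direction of least variance

# Points sampled on the plane z = 0: the normal should be +/- (0, 0, 1).
pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (0.5, 0.2, 0)]
n = pca_normal(pts)
print(abs(n[2]))  # ~1.0
```

Near a sharp edge this baseline averages across two surfaces and smears the normal, which is exactly the failure the subspace clustering above is designed to fix.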
Today, digital multimedia messages have drawn more and more attention thanks to the great achievements of computer and network techniques. Nevertheless, text is still the most popular medium for people to communicate with others. Many fonts have been developed so that product designers can choose unique fonts to express their ideas gracefully. It is commonly believed that handwriting can reflect one's personality, emotion, feeling, education level, and so on. This is especially true of Chinese calligraphy. However, it is not easy for ordinary users to create a custom font from their personal handwriting. In this study, we performed a process reengineering in font generation and present a new method to create fonts in batch mode. Rather than creating glyphs of characters one by one according to their codepoints, users create glyphs incrementally in an on-demand manner. A Java implementation was developed to read a document image of handwritten Chinese characters and generate a vector font from them. Preliminary experimental results show that the proposed method can help ordinary users create their personal handwritten fonts easily and quickly.
In this paper, a novel method of conformal parameterization for triangular meshes is presented. First, based on geodesics on a mesh, an algorithm for constructing local barycentric coordinates is proposed. Then, these local coordinates are merged via a linear system to form a global conformal parameterization of the mesh. The conformal mesh parameterization method presented here can be viewed as a development of the shape-preserving method proposed by M. S. Floater. It avoids the error of locally approximating the so-called geodesic polar mapping and hence gives better results. Experimental results are given to illustrate the effectiveness of the proposed methods.
Considering that existing line segment detection algorithms may detect a long line segment as several short
fragmented segments, a novel line segment linking algorithm is proposed in this paper to improve the performance of
line segment detection. Since the gradient orientations of points on the Right Linking Segments (RLSs) have better
consistency than those on Wrong Linking Segments (WLSs), a feature descriptor is designed for each candidate linking
segment based on gradient orientation information, which can effectively distinguish RLSs from WLSs. Experimental results on the test images show that the proposed method can greatly improve the original line segment detection results by connecting most fragmented line segments accurately.
Strand structures such as tails or feelers are common in man-made and natural shapes. Knowledge of the strand structures a shape possesses can be exploited in its matching, recognition, retrieval, etc. Although a variety of shape decomposition methods have been presented, there is still a need for a robust and versatile method to detect strand structures, especially when shapes exhibit large deformation or noise. Based on the visibility of points, we design a shape descriptor and propose an effective method to detect the strand structures present in 2D shapes. The intuitive idea is that points in strand structures can see only a small number of points from their reference points inside the shape. Meanwhile, the visibility of a point is more robust than its convex-concave features. Extensive experiments have been performed on shapes with various kinds of deformation and large noise, demonstrating the robustness and effectiveness of our strand structure detection method.
The brush plays an important role in creating Chinese calligraphy. We regard a single bristle of a writing brush as an
elastic rod and the brush tuft absorbing ink as an elastic cone, which naturally deforms according to the force exerted on
it when painting on a paper, and the brush footprint is formed by the intersection region between the deformed tuft and
the paper plane. To efficiently generate brush strokes, this paper introduces an interpolation and texture-mapping approach between two adjacent footprints, and automatically applies a bristle-splitting texture to the stroke after prolonged painting.
Experimental results demonstrate that our method is effective and reliable. Users can create realistic calligraphy in real
time.
This paper offers an algebraic explanation of a new and prosperous branch of evolutionary metaheuristics, "skeletal algorithms". We show how this explanation gives rise to algorithms for the recognition of algebraic theories and present sample applications.
This paper presents a novel algorithm for selecting random features via compressed sensing to improve the
performance of Normalized Cuts in image segmentation. Normalized Cuts is a clustering algorithm that has been widely
applied to segmenting images, using features such as brightness, intervening contours and Gabor filter responses. Some
drawbacks of Normalized Cuts are that computation times and memory usage can be excessive, and the obtained
segmentations are often poor. This paper addresses the need to improve the processing time of Normalized Cuts while
improving the segmentations. A significant proportion of the time in calculating Normalized Cuts is spent computing an
affinity matrix. A new algorithm has been developed that selects random features using compressed sensing techniques
to reduce the computation needed for the affinity matrix. The new algorithm, when compared to the standard
implementation of Normalized Cuts for segmenting images from the BSDS500, produces better segmentations in
significantly less time.
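The affinity matrix whose computation dominates Normalized Cuts is typically a dense Gaussian kernel over per-pixel features. A minimal sketch of that generic form (not the authors' compressed-sensing variant) is:

```python
import numpy as np

def affinity_matrix(features, sigma=1.0):
    """Dense Gaussian affinity used by Normalized Cuts:
    W[i, j] = exp(-||f_i - f_j||^2 / sigma^2)."""
    F = np.asarray(features, dtype=float)
    d2 = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    return np.exp(-d2 / sigma ** 2)

# Three "pixels" described by one brightness feature each.
W = affinity_matrix([[0.0], [0.1], [5.0]])
print(W[0, 1] > W[0, 2])  # similar pixels get higher affinity -> True
```

Because this matrix is O(n^2) in the number of pixels, reducing the features that feed it (as the compressed-sensing selection above does) directly attacks the dominant cost.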
The booming electronic book (e-book), as an extension of the paper book, is popular with readers. Recently, much effort has been put into realistic page-turning simulation for e-books to improve the reading experience. This paper presents a new 3D page-turning simulation approach, which employs piecewise time-dependent cylindrical surfaces to describe the turning page and constructs a smooth transition method between the time-dependent cylinders. The page-turning animation is produced by sequentially mapping the turning page onto cylinders with different radii and positions. Compared to previous approaches, our method is able to imitate various effects efficiently and obtains a more natural animation of the turning page.
The distance measure from a point to a segment is one of the determinants of the efficiency of the DP (Douglas-Peucker) polyline simplification algorithm. A zone-divided distance measure, instead of the perpendicular distance alone, was proposed by Dan Sunday [1] to address this deficiency of the original DP algorithm. A new, more efficient zone-divided distance measure is proposed in this paper. First, a rotated coordinate system is established based on the two endpoints of the curve. Second, the coordinates of each point in the rotated system are computed. Finally, the new coordinates are used to divide the points into three zones and to calculate the distance: the Manhattan distance is adopted in zones I and III, and the perpendicular distance in zone II. Compared with Dan Sunday's method, the proposed method takes full advantage of the computation result of the previous point. The amount of calculation remains essentially unchanged for points in zones I and III, and is reduced significantly for points in zone II, which have the highest proportion. Experimental results show that the proposed distance measure improves the efficiency of the original DP algorithm.
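The zone-divided distance described above can be sketched as follows (a simple unrotated-coordinate version for illustration; the paper's rotated-coordinate bookkeeping that reuses the previous point's result is omitted):

```python
def zone_distance(p, a, b):
    """Zone-divided distance from point p to segment ab: Manhattan distance
    to the nearer endpoint in zones I/III, perpendicular distance in zone II.
    (A sketch mirroring the zoning above, not the paper's implementation.)"""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    if t <= 0:                       # zone I: before endpoint a
        return abs(px - ax) + abs(py - ay)
    if t >= 1:                       # zone III: beyond endpoint b
        return abs(px - bx) + abs(py - by)
    # zone II: perpendicular (cross-product) distance to the line
    return abs(dx * (py - ay) - dy * (px - ax)) / (dx * dx + dy * dy) ** 0.5

print(zone_distance((1, 1), (0, 0), (2, 0)))   # zone II -> 1.0
print(zone_distance((-1, -1), (0, 0), (2, 0))) # zone I  -> 2
```

Using the cheap Manhattan distance in the endpoint zones is safe for DP because only the farthest point matters, and any consistent over-estimate in those zones preserves that ordering well in practice.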
Flood protection of the Fuhe river basin has received close attention since the Changkai levee breach in 2010. This paper constructs a GIS-based model of flood disaster loss calculation that considers both the flood disaster and social economic development. First, socio-economic indexes are selected according to the characteristics of urban and rural areas. Second, a mathematical model of flood routing using the Finite Volume Method is built on spatial information grids, from which the inundation depth and flood duration can be extracted. Finally, we calculate the loss with the flood disaster loss calculation model. This paper effectively solves the problem of overlaying flood characteristics with administrative boundaries, which improves the accuracy of flood disaster assessment.
This paper presents a novel edge-guided filtering scheme for decomposition-based tone mapping, whose advantage is that it prevents two major defects of filter-driven multi-scale decomposition: halo artifacts and over-smoothing distortion. First, we compute an edge-preserving smoothed image by gradient-domain reconstruction with given edges. Then we apply this output in high-dynamic-range tone mapping to address the aforementioned problems. Finally, experimental results are presented to demonstrate the effectiveness of our method in producing high-quality low-dynamic-range outputs.
Image texture analysis plays an important role in object detection and recognition in image processing. Texture analysis can be used for the early detection of breast cancer by classifying mammogram images into normal and abnormal classes. This study investigates breast cancer detection using texture features obtained from the grey-level co-occurrence matrices (GLCM) of curvelet sub-band levels, combined with texture features obtained from the image itself. A GLCM was constructed for each sub-band of three curvelet decomposition levels. The obtained feature vector was presented to the classifier to differentiate between normal and abnormal tissues. The proposed method was applied to 305 regions of interest (ROIs) cropped from the MIAS dataset. The simple logistic classifier achieved an 86.66% classification accuracy rate, with 76.53% sensitivity and 91.3% specificity.
Accurate assessment of the quality of color images is an important step in many image processing systems that convey visual information through reproduced images. An accurate objective image quality assessment (IQA) method is expected to produce assessment results that agree closely with subjective assessment. To assess the quality of color
images, many approaches simply apply the metric for assessing the quality of gray scale images to each of three color
channels of the color image, neglecting the correlation among three color channels. In this paper, a metric for assessing
color images’ quality is proposed, in which the model of variable just-noticeable color difference (VJNCD) is employed
to estimate the visibility thresholds of distortion inherent in each color pixel. With the estimated visibility thresholds of
distortion, the proposed metric measures the average perceptible distortion in terms of the quantized distortion according
to the perceptual error map similar to that defined by National Bureau of Standards (NBS) for converting the color
difference enumerated by CIEDE2000 to the objective score of perceptual quality assessment. The perceptual error map
in this case is designed for each pixel according to the visibility threshold estimated by the VJNCD model. The
performance of the proposed metric is verified by assessing the test images in the LIVE database and is compared with that of many well-known IQA metrics. Experimental results indicate that the proposed metric is an effective IQA method that can accurately predict the quality of color images, in terms of the correlation between objective scores and subjective evaluation.