In this paper, the ISOMAP algorithm is applied to anomaly detection in hyperspectral images on the basis of feature analysis. An improved ISOMAP algorithm is then developed to address a limitation of the original algorithm: it selects neighborhoods according to the spectral angle, thus avoiding the instability of neighborhoods in the high-dimensional spectral space. Experimental results show the effectiveness of the algorithm in improving detection performance.
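As a rough illustration of the neighborhood-selection idea (not the authors' implementation), the sketch below builds a k-nearest-neighbor graph for ISOMAP using the spectral angle between pixel spectra instead of Euclidean distance; the array shapes, the value of k, and the random test data are assumptions.

```python
import numpy as np

def spectral_angle_matrix(X):
    """Pairwise spectral angle (radians) between row spectra of X (n_pixels x n_bands)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    cos = np.clip(Xn @ Xn.T, -1.0, 1.0)
    return np.arccos(cos)

def knn_graph_by_spectral_angle(X, k=10):
    """Neighborhood graph for ISOMAP: edge weights are spectral angles to the k nearest neighbors."""
    A = spectral_angle_matrix(X)
    n = A.shape[0]
    W = np.full((n, n), np.inf)
    for i in range(n):
        idx = np.argsort(A[i])[1:k + 1]   # skip self (angle 0)
        W[i, idx] = A[i, idx]
    return np.minimum(W, W.T)             # symmetrize: keep an edge if either direction is a neighbor

# toy usage: 200 random "pixel spectra" with 50 bands
X = np.abs(np.random.randn(200, 50))
W = knn_graph_by_spectral_angle(X, k=10)  # shortest paths over W would feed the ISOMAP embedding
```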
A domain ontology is a descriptive representation of a particular domain that describes the concepts in the domain and the relationships among those concepts, and organizes them in a hierarchical manner. It is also defined as a structure of knowledge, used as a means of sharing knowledge with the community. An important aspect of using ontologies is to make information retrieval more accurate and efficient.
Thousands of domain ontologies from all around the world are available online in ontology repositories. Repositories like SWOOGLE currently hold over 1000 ontologies covering a wide range of domains. To date, however, no ontology covering the domain of "Sufism" was available. This unavailability of a "Sufism" domain ontology became the motivating factor for this research. The research produced a working "Sufism" domain ontology as well as a framework; the design of the proposed framework focuses on resolving the problems encountered while creating the "Sufism" ontology. The development and working of the "Sufism" domain ontology are covered in detail in this research.
The word "Sufism" refers to Islamic mysticism. One of the reasons for choosing "Sufism" for ontology creation is the global interest it attracts. This research also created a number of individuals that inherit the concepts of the "Sufism" ontology. The creation of individuals helps to demonstrate efficient and precise retrieval of data from the "Sufism" domain ontology. The experiment of creating the "Sufism" domain ontology was carried out with Protégé, an open-source tool for ontology creation and editing.
There are advantages to describing knowledge with a discernibility matrix. For the discernibility matrix of a decision table, this paper finds that some of the information it contains is redundant. The paper therefore defines a new discernibility matrix, named the simple discernibility matrix, which contains no redundant information, reduces the number of comparisons between individuals, and improves efficiency.
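For context, the sketch below computes the classic (unsimplified) discernibility matrix of a small decision table in the rough-set sense: each entry holds the condition attributes on which two objects differ, recorded only when their decisions differ. The toy decision table is made up for illustration; the paper's simplified matrix is not reproduced.

```python
import numpy as np

def discernibility_matrix(cond, dec):
    """Classic discernibility matrix of a decision table.
    cond: (n_objects, n_attributes) condition values; dec: (n_objects,) decision values.
    Entry (i, j) is the set of attributes on which objects i and j differ,
    recorded only for pairs with different decisions."""
    n = len(dec)
    M = [[set() for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(i):
            if dec[i] != dec[j]:
                M[i][j] = {a for a in range(cond.shape[1]) if cond[i, a] != cond[j, a]}
    return M

# toy decision table: 4 objects, 3 condition attributes, binary decision
cond = np.array([[1, 0, 2],
                 [1, 1, 2],
                 [0, 0, 1],
                 [1, 0, 1]])
dec = np.array([0, 0, 1, 1])
for row in discernibility_matrix(cond, dec):
    print([sorted(entry) for entry in row])
```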
Inter-class testing is the testing of the classes that compose an object-oriented system or subsystem during integration. An MM Path is defined as an interleaved sequence of method executions linked by messages. It represents the interactions between methods in object-oriented software well and hence is well suited to object-oriented integration testing. However, current MM Path generation methods only support intra-class testing. In this paper, a call-graph-based approach is proposed to extend automatic MM Path generation from the intra-class to the inter-class level. The approach is evaluated by controlled experiments on 12 Java benchmark programs with two typical call graph construction algorithms, Class Hierarchy Analysis and Andersen's points-to analysis, and the impact of the two algorithms on inter-class MM Path generation efficiency is studied. The results show that our approach is practicable and that Andersen's points-to analysis outperforms Class Hierarchy Analysis for inter-class MM Path generation.
A moral education website offers a workable solution to the low transmission speed and limited reach of traditional moral education. The aim of this paper is to illustrate the design of a moral education website and the advantages of using it to support moral teaching. The rationale for a moral education website is discussed at the beginning of the paper, and the development tools are introduced. The system design is illustrated through module design and database design, and how to access data in the SQL Server database is discussed in detail. Finally, a conclusion is drawn from the preceding discussion.
Web services have gained increasing popularity, but this has also made appropriate web service selection a challenge. Because a wide variety of web services may be offered to perform one specific task, it is essential that users are supported in selecting appropriate ones. Keyword-based web service discovery, one of the most practical approaches, ignores user context, which is considered valuable. In this paper, we propose a user context model that enables user context to participate in the process of web service discovery. Our main contribution is a user context utilization method that improves the accuracy of keyword-based approaches. Experiments and results are provided to evaluate the proposed method.
This article puts forward a method for video image acquisition and processing, and a system based on the Java Media Framework (JMF) implemented with it. Taking advantage of the Java language, the method can be realized in both B/S and C/S modes. Key issues such as locating the video data source, playing video, and video image acquisition and processing are described in detail. Operation of the system shows that the method is fully compatible with common video capture devices. The system also offers advantages such as lower cost, greater capability, easier development, and cross-platform support. Finally, the application prospects of the Java/JMF-based method are pointed out.
Web image search results usually contain duplicate copies. This paper considers the problem of detecting and
clustering duplicate images contained in web image search results. Detecting and clustering the duplicate images together
facilitates users' viewing. A novel method is presented in this paper to detect and cluster duplicate images by measuring
similarity between their topics. More specifically, images are viewed as documents consisting of visual words formed by
vector quantizing affine-invariant visual features. Then a statistical model widely used in the text domain, the PLSA (Probabilistic Latent Semantic Analysis) model, is utilized to map images into a probabilistic latent semantic space. Because the main content remains unchanged despite small digital alterations, duplicate images lie close to each other in the derived semantic space. Based on this, a simple clustering process can successfully detect duplicate images and cluster them together. Compared with methods based on comparing hash values of visual words, this method is more robust to visual-feature-level alterations applied to the images. Experiments demonstrate the effectiveness of this method.
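A minimal sketch of the latent-topic step described above, assuming local features have already been quantized into visual-word histograms; the PLSA fit is a bare-bones EM loop written for clarity, with made-up sizes, and is not the authors' implementation.

```python
import numpy as np

def plsa(counts, n_topics=8, n_iter=50, eps=1e-12):
    """Bare-bones PLSA via EM. counts: (n_docs, n_words) visual-word histogram matrix.
    Returns P(z|d), used as the latent-semantic representation of each image."""
    n_docs, n_words = counts.shape
    p_w_z = np.random.dirichlet(np.ones(n_words), size=n_topics)   # P(w|z), shape (K, V)
    p_z_d = np.random.dirichlet(np.ones(n_topics), size=n_docs)    # P(z|d), shape (D, K)
    for _ in range(n_iter):
        # E-step: posterior P(z|d,w) proportional to P(z|d) * P(w|z)
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]              # (D, K, V)
        post = joint / (joint.sum(axis=1, keepdims=True) + eps)
        # M-step: re-estimate P(w|z) and P(z|d) from expected counts
        weighted = counts[:, None, :] * post                       # n(d,w) * P(z|d,w)
        p_w_z = weighted.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + eps
        p_z_d = weighted.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + eps
    return p_z_d

# toy usage: 30 "images" over a 200-word visual vocabulary; duplicates end up close in P(z|d) space
counts = np.random.poisson(1.0, size=(30, 200)).astype(float)
topics = plsa(counts)
dist = np.linalg.norm(topics[:, None, :] - topics[None, :, :], axis=2)  # pairwise distances for clustering
```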
In this paper, we develop a new image denoising method based on block-matching and transform-domain filtering.
The developed method is derived from the current state-of-the-art denoising method BM3D. We separate the 3D transform of the original method into two 1D transform steps, which further enhances sparsity for signals whose elements are highly similar and weakens it for signals whose elements are dissimilar. Because the 1D filtering operates only on highly similar elements and the 2D filtering on image blocks is removed, image details are better preserved and fewer artifacts are introduced than with the original method. Experimental results demonstrate that the developed method is
competitive and better than some of the current state-of-the-art denoising methods in terms of peak signal-to-noise ratio,
structural similarity, and subjective visual quality.
SAR is a useful tool for monitoring oil spills. However, because the appearance of oil spills in SAR images is similar to that of other oceanic phenomena, it is difficult to distinguish oil spills from "look-alikes". This paper presents a novel multi-level method to extract the oil film from the original SAR image. The method retains the original edge information of the oil film while separating it from the background. The Lee filter, fuzzy c-means, a coherence filter and morphological operations are applied to de-noise and segment the SAR image. Experimental results show that the method can not only distinguish the oil film from the sea, but also recover part of the edge information lost during de-noising and segmentation.
To address the projective distortion caused by a tilted camera in traditional photogrammetry, a self-correcting photogrammetric method is proposed for measuring projectile coordinates. The causes of projective distortion and its impact on measurement accuracy are analyzed. Through analysis of the imaging model of arbitrary points on the target, the formula for the object distance is derived, and the correspondence between coordinates in the image and projectile coordinates on the target is established. By extracting and computing the four markers placed on the target, the constant in this correspondence is solved, the measurement error caused by projective distortion is overcome, and absolute measurement of projectile coordinates on the target is realized. Test results show that projectile coordinates on the target can be measured accurately using the method designed in this paper, and the accuracy and consistency of fire intensity measurement are improved. The method can also be used to check other related automatic measurement systems.
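The image-to-target correspondence described above behaves like a plane-to-plane projective mapping. As an illustrative sketch (not the authors' exact formulation), the code below fits a homography from four marker correspondences and maps a projectile's image coordinates onto the target plane; the marker coordinates are made-up values.

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct linear transform: fit H (3x3) so that dst ~ H * src for >= 4 point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def map_point(H, pt):
    """Apply the homography to one image point, returning target-plane coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# four markers: hypothetical pixel positions and their known target coordinates (mm)
img_markers = [(102.3, 98.7), (1820.5, 110.2), (1835.1, 1410.8), (95.4, 1398.6)]
tgt_markers = [(0.0, 0.0), (2000.0, 0.0), (2000.0, 1500.0), (0.0, 1500.0)]
H = homography_from_points(img_markers, tgt_markers)
print(map_point(H, (960.0, 750.0)))   # projectile position on the target plane, in mm
```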
This paper presents an algorithm to detect the eyes and mouth of faces under various imaging conditions, such as different poses, dimensions, illuminations and resolutions, with or without glasses, and with different expressions. Initially, the algorithm converts the face region detected by the Viola-Jones algorithm to gray-scale and reduces the resolution by decimation to obtain a low-resolution region. Pixels that are darker than their surroundings are then located by eight individual filters. Finally, the centers of the eyes and mouth are found using a triangle model. Experiments show that the algorithm achieves 98.7% accuracy in detecting the eyes and mouth.
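A hedged sketch of the first stages of this pipeline using OpenCV's bundled Viola-Jones cascade: face detection, gray-scale conversion, decimation, and a crude dark-pixel candidate mask. The input file name, decimation factor and darkness threshold are assumptions, and the paper's eight filters and triangle model are not reproduced.

```python
import cv2

# Hypothetical input image; OpenCV ships the Viola-Jones frontal-face cascade used below.
img = cv2.imread("face.jpg")
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    roi = gray[y:y + h, x:x + w]
    # decimate to a low-resolution region, as described in the abstract (factor is an assumption)
    low_res = cv2.resize(roi, (w // 4, h // 4), interpolation=cv2.INTER_AREA)
    # candidate dark pixels (eyes/mouth are darker than surroundings); threshold is an assumption
    dark = low_res < (low_res.mean() - low_res.std())
    print("face at", (x, y, w, h), "dark-pixel candidates:", int(dark.sum()))
```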
An image registration method is proposed in this paper for accurately aligning two images of the same scene
captured simultaneously by visible CCD and IR (infrared) cameras. In image fusion systems, CCD and IR sensors are
physically aligned as closely as possible and yet significant image mis-alignment remains due to differences in field of
view, lens distortion and other camera characteristics. An affine transformation is therefore used to align the two images. First, corresponding feature point pairs are selected manually and used to calculate the transform coefficients. The transform coefficients are then further optimized by maximizing MI (mutual information) over the global image. After the unregistered image is transformed with the optimal coefficients, an accurately registered image is obtained.
The experimental results demonstrate that this method can provide highly accurate registered images for image fusion.
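A minimal sketch of the two building blocks named above, assuming hypothetical manually selected point pairs: a least-squares affine fit that gives the initial transform, and a histogram-based mutual-information score that could then drive further refinement of the coefficients.

```python
import numpy as np

def affine_from_pairs(src, dst):
    """Least-squares 2x3 affine transform mapping src points to dst points (>= 3 pairs)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])        # (n, 3): [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3, 2)
    return coeffs.T                                      # (2, 3) affine matrix

def mutual_information(img_a, img_b, bins=32):
    """MI between two images of identical shape, from their joint gray-level histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = hist / hist.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

# hypothetical manually selected CCD/IR feature-point pairs (pixels)
ccd_pts = [(120, 80), (400, 95), (390, 300), (110, 310)]
ir_pts = [(118, 84), (395, 101), (386, 303), (109, 315)]
M = affine_from_pairs(ir_pts, ccd_pts)   # initial transform; MI can then score candidate refinements
print(M)
```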
Super-resolution image reconstruction aims to obtain a high-resolution image from multiple low-resolution images in order to achieve a better visual effect. It is widely used in applications such as video surveillance and image recognition. This paper proposes a POCS super-resolution image reconstruction algorithm based on the projection residue, which improves the visual quality of the reconstructed HR image. The characteristic of this method is that it takes full advantage of the statistical properties of the projection residue included in the constraints and adapts the correction threshold accordingly. The cause of the projection residue is analyzed and its characteristic parameter is calculated to constrain the solution. Experimental results show that our algorithm is effective in visual evaluation and improves the PSNR.
Using an optical sensor array, a precision motion control system on a conveyer follows an irregularly shaped leather sheet to measure its surface area. In operation, the leather sheet passes along the conveyer belt and the optical sensor array detects the sheet's edge. In this way the outside curvature of the leather sheet is detected and fed to the controller to estimate its approximate area. Such a system can measure irregular shapes while neglecting rounded corners.
To minimize the error in calculating the surface area of an irregular contour with this system, the motion control system only requires the footprint of each optical sensor to be small and the spacing between sensors to be small as well.
In the proposed technique, surface area measurement of the irregularly shaped leather sheet is performed by defining the velocity and detecting the position of the move. The motion controller takes this information and builds the necessary edge profile on a point-to-point basis. As a result, the irregular shape of the leather sheet is mapped and then fed to the controller to calculate the surface area.
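A toy sketch of the area-accumulation idea (not the authors' controller): each sample contributes the covered width reported by the sensor array times the belt travel during one sample period. The sensor pitch, belt speed, sample period and synthetic "sheet" are all assumptions.

```python
import numpy as np

def sheet_area(sensor_frames, sensor_pitch_m, belt_speed_mps, sample_period_s):
    """Approximate surface area of a sheet passing over a transverse optical sensor array.
    sensor_frames: (n_samples, n_sensors) boolean array, True where a sensor is covered.
    Each sample contributes (covered sensors * pitch) * (belt travel per sample)."""
    covered_width = sensor_frames.sum(axis=1) * sensor_pitch_m      # metres covered per sample
    travel_per_sample = belt_speed_mps * sample_period_s            # metres of belt travel per sample
    return float(covered_width.sum() * travel_per_sample)

# toy example: 500 samples from a 64-sensor array spaced 5 mm apart, belt at 0.2 m/s, 10 ms sampling
frames = np.zeros((500, 64), dtype=bool)
frames[100:400, 10:50] = True                         # a rectangular "sheet" 40 sensors wide, 300 samples long
print(sheet_area(frames, 0.005, 0.2, 0.01), "m^2")    # 0.2 m x 0.6 m rectangle -> 0.12 m^2
```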
A novel and efficient method, called the marked bounding box method and based on marching cubes, is presented for point cloud data reduction of sole patterns. The method is characterized in that each bounding box is marked with an index during data reduction, and the index is later used for data reconstruction. The data reconstruction is implemented from the simplified data set using triangular meshes, with the indices used to search for the nearest points in adjacent bounding boxes. Afterwards, the normal vectors are estimated to determine the strength and direction of the light reflected from the surface. The proposed method is used in a sole pattern classification and query system that uses OpenGL under Visual C++ to render images of sole patterns. Numerical results are given to demonstrate the efficiency and novelty of our method, followed by conclusions and discussion.
Multiple-exposure-based methods have been an effective means of high dynamic range (HDR) imaging. Current methods depend heavily on tone mapping, and most of them are unable to accurately recover the local details and colors of the scene. In this work, we present a novel HDR method that uses multiple image cues in the image merging process. First, all the images with different exposure times are divided into uniform sub-regions and an exposure estimation technique is applied to identify the well-exposed one for each region. Once the best-exposed blocks have been selected, a blending function is proposed to remove the transition boundaries between them. A fidelity metric is introduced to assess the final fused image, and experimental results on public image libraries are given to demonstrate its high performance.
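A rough sketch of block-wise best-exposure selection with smoothed blending weights, under the assumption that "well exposed" means a block mean close to mid-gray; the block size, smoothing, and synthetic exposures are all illustrative stand-ins, not the paper's exposure estimation or blending function.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_exposures(stack, block=32, sigma=16.0):
    """Blend a stack of differently exposed grayscale images (n, H, W) in [0, 1].
    Each block is scored by how close its mean is to mid-gray (a simple 'well-exposedness'
    measure); per-image weights are then smoothed to avoid visible block boundaries."""
    n, H, W = stack.shape
    weights = np.zeros_like(stack)
    for i in range(0, H, block):
        for j in range(0, W, block):
            blocks = stack[:, i:i + block, j:j + block]
            score = -np.abs(blocks.mean(axis=(1, 2)) - 0.5)   # best exposed = closest to 0.5
            weights[score.argmax(), i:i + block, j:j + block] = 1.0
    weights = np.stack([gaussian_filter(w, sigma) for w in weights])
    weights /= weights.sum(axis=0, keepdims=True) + 1e-8
    return (weights * stack).sum(axis=0)

# toy usage: three synthetic exposures of the same scene
base = np.clip(np.random.rand(256, 256), 0, 1)
stack = np.stack([np.clip(base * g, 0, 1) for g in (0.3, 1.0, 2.5)])
fused = fuse_exposures(stack)
```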
A new image denoising method is presented that combines the contourlet transform with the stationary wavelet transform, using an iterated filter bank built from the stationary wavelet transform and directional filter banks (DFB). Experimental results show that, for image denoising, the proposed method can suppress speckle in SAR images effectively while preserving the edge features and textural information of the scene.
The measurement system for road surface profiles and the calculation method for segment road test data are first reviewed. Sudden vertical steps occur at the connection points of segment data, which hinders the application of road surface data in automotive engineering. A new smooth connection method for segment test data is therefore proposed, which corrects the sudden vertical steps at the connections using the Signal Local Baseline Adjustment (SLBA) method. An actual example illustrates the detailed process of smoothly connecting segment test data with the SLBA method and the resulting adjustment at the connection points. The application and calculation results show that the SLBA method is simple and achieves an obvious effect in the smooth connection of segment road test data. The SLBA method can be widely applied to segment road surface data processing and to long-period vibration signal processing.
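To make the idea concrete, the sketch below removes the vertical step at a connection point by applying a correction that is full-sized at the joint and ramps to zero over a short window, so the rest of the segment keeps its original baseline. This is only a rough stand-in for the local-baseline-adjustment idea; the blend length and test signals are assumptions.

```python
import numpy as np

def connect_segments(seg_a, seg_b, blend_len=200):
    """Join two road-profile segments without a sudden vertical step at the connection.
    Only samples near the joint are adjusted: the correction equals the full step at the
    first sample of seg_b and ramps linearly to zero after blend_len samples."""
    step = seg_b[0] - seg_a[-1]
    n = min(blend_len, len(seg_b))
    correction = np.zeros(len(seg_b))
    correction[:n] = step * np.linspace(1.0, 0.0, n)
    return np.concatenate([seg_a, seg_b - correction])

# toy usage: two profile segments with roughly a 0.05 m step at the joint
x = np.linspace(0, 10, 1000)
seg_a = 0.01 * np.sin(x)
seg_b = 0.01 * np.sin(x + 10) + 0.05
profile = connect_segments(seg_a, seg_b)
```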
The aim of this paper is to provide a line drawing algorithm that is accurate and efficient on dissimilar hardware platforms and under different application requirements. The best-known algorithm for drawing a straight line smoothly is the Bresenham algorithm. The classic Bresenham algorithm has the advantage of using only integer arithmetic, with no division or fractional values. However, it generates only one pixel per iteration, so it is inevitably somewhat slow. Firstly, this paper analyzes recent research on the Bresenham line drawing algorithm. Secondly, we exploit the relationship between the line generation model and the line's slope, and present an improved algorithm that generates the pixels of a line in row-major order on a raster graphics display device. The core principle of the improved algorithm is to exploit the correspondence between the two ends of the line and the symmetry of segments. Thirdly, after discussing the theory and structure, the implementation and simulation of the improved algorithm are given, and the corresponding hardware acceleration, based on a shift-register cyclic subtraction technique, is briefly described. Finally, results are presented to demonstrate that the new algorithm inherits the advantage of the classic Bresenham algorithm of avoiding division and fractions, that its speed is greatly increased, and that it is easy to implement in hardware.
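For reference, here is a plain-Python version of the classic integer-only Bresenham algorithm that the improved method builds on (one pixel per iteration, no division or fractions); the row-major, symmetry-based improvements described above are not reproduced here.

```python
def bresenham(x0, y0, x1, y1):
    """Classic Bresenham line: integer arithmetic only, no division, one pixel per step."""
    pixels = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 >= x0 else -1
    sy = 1 if y1 >= y0 else -1
    err = dx - dy                      # signed error term, updated with additions only
    x, y = x0, y0
    while True:
        pixels.append((x, y))
        if x == x1 and y == y1:
            break
        e2 = 2 * err
        if e2 > -dy:                   # step in x
            err -= dy
            x += sx
        if e2 < dx:                    # step in y
            err += dx
            y += sy
    return pixels

print(bresenham(0, 0, 9, 4))           # pixels of a shallow line from (0,0) to (9,4)
```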
The current most desirable image retrieval feature is retrieving images based on their semantic content. In order to
improve the retrieval accuracy of content-based image retrieval systems, research focus has been shifted from designing
sophisticated low-level feature extraction algorithms to reducing the 'semantic gap' between the visual features and the
richness of human semantics. In this paper, we put forward a system framework of image retrieval based on content and
ontology, which has the potential to fully describe the semantic content of an image, allowing the similarity between
images and a retrieval query to be computed accurately. In the system, we identify three major categories of techniques for narrowing the "semantic gap": (1) using an object ontology to define high-level concepts; (2) using machine learning methods to associate low-level features with query concepts; and (3) using ontology reasoning to extend image retrieval. Finally, the paper reports test experiments whose results show the feasibility of the system framework.
In combination with the latest machine vision technology, various characteristic border detection algorithms are introduced. Criteria for assessing the quality of border detection algorithms and directions for new algorithms are proposed at the end.
A new fuzzy-based thresholding method for medical images, especially cervical cytology images containing blob and mosaic structures, is proposed in this paper. Many existing thresholding algorithms can segment either blob or mosaic images, but no single algorithm handles both. In this paper, an input cervical cytology image is binarized and preprocessed, and the pixel value with the minimum Fuzzy Gaussian Index is identified as the optimal threshold value and used for segmentation. The proposed technique is tested on various cervical cytology images containing blob or mosaic structures and compared with existing algorithms, and it proves better than them.
This paper aims to solve the problem of detecting ghost objects, a common problem in background subtraction algorithms. A ghost object is a falsely detected object that does not correspond to any actual object in the current image. In this work, we propose a ghost detection and removal method based on color similarity comparison. The proposed solution is designed on the assumption that the ghost problem occurs because the object exists in the background image rather than in the current image. We use the color similarity between the detected foreground area and its surrounding area to determine whether the object appears in the background or in the current image, and consequently to identify whether the detected object is a ghost or an actual object. The proposed solution has been tested on various datasets, including PETS2001 and our own datasets, and the results show that it solves the ghost problem.
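One way such a comparison could be implemented (a heuristic sketch, not the paper's rule): compare the color histogram of the detected region against that of its surrounding ring in both the current frame and the background image; if the region blends into its surroundings in the current frame but stands out in the background, the detection is likely a ghost. The histogram sizes and the correlation-based decision are assumptions.

```python
import cv2
import numpy as np

def region_hist(img_bgr, mask):
    """Normalized H-S color histogram of the masked pixels (mask: 8-bit, 255 inside region)."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    h = cv2.calcHist([hsv], [0, 1], mask, [32, 32], [0, 180, 0, 256])
    cv2.normalize(h, h)
    return h

def is_ghost(current, background, fg_mask, ring_mask):
    """Heuristic ghost test: compare region-vs-surroundings similarity in both images.
    Higher similarity in the current frame means the object blends into the live scene,
    i.e. the real object lives in the background image and the detection is a ghost."""
    sim_cur = cv2.compareHist(region_hist(current, fg_mask),
                              region_hist(current, ring_mask), cv2.HISTCMP_CORREL)
    sim_bg = cv2.compareHist(region_hist(background, fg_mask),
                             region_hist(background, ring_mask), cv2.HISTCMP_CORREL)
    return sim_cur > sim_bg

# usage sketch: fg_mask comes from background subtraction;
# ring_mask = cv2.dilate(fg_mask, kernel) - fg_mask gives the surrounding area.
```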
In this paper, a variational method for motion segmentation is proposed, realized by first estimating a discontinuity-preserving optical flow field and then segmenting it. The complementarity between estimation and segmentation is emphasized, and two main improvements are presented: firstly, a segmentation-oriented diffusion tensor is introduced into the optical flow model, which combines image and flow information and determines the boundaries more effectively; secondly, according to the characteristics of the vector field, a motion segmentation model that considers both motion boundaries and region information is established. The results are illustrated with image sequences that show the usefulness of the proposed approach for various problems.
A robust image partitioning technique with limited space-time requirements is crucial for real-time high-resolution mobile imaging applications. Simultaneous handling of color components is a stumbling block in the segmentation of color images. This paper presents a strategy for segregating multi-dimensional complexity in a way that resembles human visual
perception. First-order regions are developed employing the distribution of base color (hue) creating meaningful
partitions of the image. Each region being self-contained can be transmitted independently. Further segmentations on
each hue region can be done concurrently with respective local distributions of saturation (S). Finally, each of the subregions
can undergo a third-order segmentation based on intensity (I) distribution at local level. Experiments indicate
that the segregation process improves throughput and quality of segmentation. Self-contained partitions are convenient
for multi-host image sharing as well as progressive reconstruction of images in the receiver.
Automatic brain tissue segmentation is a crucial task in medical image diagnosis and treatment. This paper presents a new algorithm for segmenting different brain tissues, such as white matter (WM), gray matter (GM), cerebrospinal fluid (CSF), background (BKG), and tumor tissue. The proposed technique uses the modified intra-frame coding of H.264/AVC for feature extraction. The extracted features are then fed to an artificial back-propagation neural network (BPN) classifier that assigns each block to its appropriate class. Since the newest coding standard, H.264/AVC, has the highest compression ratio, it decreases the dimension of the extracted features and thus yields a more accurate classifier with low computational complexity. The performance of the BPN classifier is evaluated in terms of classification accuracy and computational complexity. The results show that the proposed technique is more robust and effective, with low computational complexity, compared with other recent works.
The basic aim of our study is to analyze medical images. In computer vision, segmentation refers to the process of partitioning a digital image into multiple regions. The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. There is considerable scope for the analysis carried out in our project; it could be used for monitoring medical images. Medical imaging refers to the techniques and processes used to create images of the human body (or parts thereof) for clinical purposes (medical procedures seeking to reveal, diagnose or examine disease) or for medical science (including the study of normal anatomy and function). As a discipline, and in its widest sense, it is part of biological imaging and incorporates radiology (in the wider sense), radiological sciences, endoscopy, (medical) thermography, medical photography and microscopy (e.g. for human pathological investigations). Measurement and recording techniques which are not primarily designed to produce images can also be regarded as forms of medical imaging.
The birefringence of an asymmetric photonic crystal fiber under perturbation is investigated using the finite element method. By comparing the birefringence for different sizes of the large air hole, the results indicate that when the intrinsic birefringence is small, random offsets of the hole positions have a large influence on the birefringence. As the intrinsic birefringence increases, neither the offset of the hole positions nor the variation of the hole diameters has much effect on the birefringence.
Human face detection plays a vital role in many applications, such as video surveillance, managing a face image database, and human-computer interfaces. This paper proposes a robust algorithm for face detection in still color images that works well even in a crowded environment. The algorithm uses a combination of a skin color histogram, morphological processing and geometrical analysis to detect human faces. To reinforce the accuracy of face detection, we further identify mouth and eye regions to establish the presence or absence of a face in a particular region of the image.
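A hedged sketch of the skin-color plus morphology stage using OpenCV: fixed YCrCb skin bounds are used here as an assumption (the paper builds a skin color histogram), and the mouth/eye verification step is not reproduced.

```python
import cv2
import numpy as np

def skin_candidate_regions(img_bgr, min_area=500):
    """Candidate face regions from a skin-color mask plus morphological cleanup and
    a simple geometrical (aspect-ratio) check."""
    ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], np.uint8)     # assumed fixed skin bounds in YCrCb
    upper = np.array([255, 173, 127], np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]
    return [(x, y, w, h) for (x, y, w, h) in boxes if 0.6 < w / float(h) < 1.8]

# img = cv2.imread("crowd.jpg"); print(skin_candidate_regions(img))
```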
The illumination variation problem is one of the well-known problems in face recognition in uncontrolled environments. Since both Gabor features and LTP (local ternary patterns) have been shown to be robust to illumination variations, we propose a new approach that achieves face recognition under variable illumination by combining Gabor filters with the LTP operator. Experimental results on the Yale-B and CMU PIE face databases with varying illumination, compared with published results, verify the validity of the proposed method.
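For reference, a small numpy sketch of the LTP operator on the 8-neighborhood, with the usual split into "upper" and "lower" binary codes; the threshold value is an assumption, the Gabor filtering stage is omitted, and this is not the authors' implementation.

```python
import numpy as np

def ltp_codes(gray, t=5):
    """Local Ternary Pattern on the 8-neighborhood of each interior pixel.
    Neighbors above center+t code +1, below center-t code -1, otherwise 0; the ternary
    pattern is split into 'upper' and 'lower' binary patterns as is standard for LTP."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        upper += (nb >= c + t).astype(np.int32) << bit
        lower += (nb <= c - t).astype(np.int32) << bit
    return upper, lower   # histograms of these codes form the texture descriptor

# toy usage
img = (np.random.rand(64, 64) * 255).astype(np.uint8)
up, lo = ltp_codes(img, t=5)
```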
This paper proposes a swarm-intelligence-based memetic algorithm for task allocation and scheduling in distributed systems. Task scheduling in distributed systems is known to be an NP-complete problem, and many genetic algorithms have been proposed to search the entire solution space for optimal solutions. However, these existing approaches scan the entire solution space without using techniques that can reduce the complexity of the optimization, and spending too much time on scheduling is their main shortcoming. In this paper, a memetic algorithm is therefore used to cope with this shortcoming. To balance load efficiently, Bee Colony Optimization (BCO) is applied as the local search in the proposed memetic algorithm. Extensive experimental results demonstrate that the proposed method outperforms the existing GA-based method in terms of CPU utilization.
In this paper, an extended projection temporal logic (EPTL), based on the primitive operator prj, is formalized. Further, as an executable subset of EPTL, an object-oriented MSVL is presented, which extends the temporal logic programming language MSVL to support objects, classes, aliasing, inheritance and overloading. An example of modeling and simulating digital signal processing is given to illustrate how to use and execute the language.
A new hybrid genetic algorithm based on the simple genetic algorithm is presented in this paper. In this algorithm, several genetic operators, such as the crossover operator, are improved: the threshold-based crossover method and the two-point crossover method are combined into a new hybrid crossover method. An example of the Resource-Constrained Project Scheduling Problem (RCPSP) is given, including its activity network, the execution time and resource requirements of each activity, and the selection and crossover operators used. In addition, examples demonstrate the superiority of the new algorithm, which speeds up evolution and helps obtain the optimal solution.
By analyzing the inherent nature of dynamic scheduling requirements, wearable computing based on natural human-computer interaction is taken as the basis of EOTAS dynamic scheduling methods, and a new concept of wearable human-machine cooperation is constructed accordingly. Around its concrete implementation and application, an EOTAS dynamic scheduling method based on colored extended fuzzy Petri nets is developed, providing a preliminary solution to the fast scheduling problem in EOTAS field applications in a business operating environment.
With the rapid development of information technology and the extensive demand for network resource sharing, resource hotlinking has become widespread on the Internet. The hotlinking problem not only harms the interests of legitimate websites but also greatly undermines a fair Internet environment. The anti-leech technique based on a session identifier is highly secure, but transmitting the session identifier in plaintext causes some security flaws. In this paper, a proxy hotlinking technique based on a session identifier is first introduced to illustrate these security flaws; the paper then proposes an improved anti-leech mechanism based on the session identifier, which takes a random factor as its core and detects hotlinking requests using a map table that contains the random factor, the user's information and a timestamp; finally, the security of the mechanism is analyzed theoretically. The results reveal that the improved mechanism has the merits of simple implementation, high security and great flexibility.
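A toy sketch in the spirit of the mechanism described (not the paper's exact scheme): the server issues a token derived from a random factor, the user's identity and a timestamp, keeps them in a map table, and later validates incoming requests against that table. The token lifetime and hashing choice are assumptions.

```python
import hashlib
import os
import time

TOKEN_TABLE = {}          # token -> (random_factor, user_id, timestamp); the "map table"
TOKEN_LIFETIME = 60       # seconds (assumed)

def issue_token(user_id):
    """Issue a download token bound to a random factor, the user's identity and a timestamp."""
    random_factor = os.urandom(16).hex()
    timestamp = int(time.time())
    token = hashlib.sha256(f"{random_factor}:{user_id}:{timestamp}".encode()).hexdigest()
    TOKEN_TABLE[token] = (random_factor, user_id, timestamp)
    return token

def validate_request(token, user_id):
    """A request is legitimate only if the token exists, belongs to this user, and is fresh."""
    entry = TOKEN_TABLE.get(token)
    if entry is None:
        return False
    _, owner, ts = entry
    if owner != user_id or time.time() - ts > TOKEN_LIFETIME:
        TOKEN_TABLE.pop(token, None)   # expired or stolen token is discarded
        return False
    return True

tok = issue_token("user42")
print(validate_request(tok, "user42"), validate_request(tok, "leecher"))   # True False
```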
A remote video monitoring system based on the Java Media Framework (JMF) is put forward in this paper. It is cross-platform and has low time delay and low bandwidth requirements. The system consists of three layers: a data acquisition layer, a service layer and a client layer. The hardware of the system is connected through a local area network, and various video devices can be identified by the system. The software, based on Java and JMF, captures, compresses, sends, receives and plays video data and can run on different operating systems without modification. The H.263 compression algorithm is adopted, and the RTP protocol, together with RTCP, is used to transport video data in the system. The client layer can access the system via the Internet or 3G and is convenient and flexible. Maintenance personnel can easily supervise device status at any time so that the equipment is always in good condition. This helps enhance the competitiveness of power plants.
This paper concerns the problem of automatic video stitching in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which only a few selected master cameras need to be calibrated. SURF is used to find matched pairs of image key points from different cameras, and the camera pose is then estimated and refined. A homography matrix is employed to compute the overlapping pixels, and finally a boundary resampling algorithm is applied to blend the images. Simulation results demonstrate the efficiency of our method.
On the basis of the User Datagram Protocol (UDP), a welding robot network communication protocol (WRNCP) is designed with some improvements, operating at the transport and application layers of the TCP/IP stack. According to the characteristics of video data, a broadcast push model (BPM) transmission method is designed to improve the efficiency and stability of video transmission, and a network information transmission system is developed for real-time control of the welding robot network.
A multi-cue Camshift algorithm is presented for object tracking in complex scenes. Because the color-based Camshift algorithm has difficulty with objects against color-cluttered backgrounds, this paper integrates a motion cue with the color cue to extend its range of application. Moreover, a robust real-time multiresolution algorithm is used to obtain motion information against a dynamic background. Experimental work verifies that the proposed multiple-cue strategy improves the tracking performance of the classical single-cue Camshift.
License plate recognition (LPR) systems play an important role in intelligent transportation systems (ITSs). It is difficult to locate a license plate in a complex scene. Our location strategy integrates the blue region, vertical texture and contrast features of the license plate within the framework of an improved visual attention model. We improve the visual attention model by replacing normalization and linear combination with feature image binarization and logical operations. The multi-scale center-surround difference mechanism in the visual attention model makes the feature extraction robust. Tests on pictures captured by different devices in different environments give encouraging results, with a location success rate as high as 95.28%.
This paper introduces the implementation of a face verification system. For feature extraction, the algorithm is based on a classical texture descriptor, Local Binary Patterns (LBP). For decision making, a new method is proposed to determine a client-dependent threshold (CD-th). Compared with the traditional fixed threshold, it significantly reduces the error rate. Moreover, a symmetry factor is defined to increase the frontal face detection rate, and a storage scheme is designed to reduce the time spent on feature extraction. The implemented face verification system requires only one sample per person and overcomes the difficulties of multi-sample face verification systems, including image capture problems, storage limitations and time consumption. The experiments demonstrate the effectiveness of the proposed system.
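A hedged sketch of LBP-based verification with a client-dependent threshold under the one-sample-per-person constraint: here each client's threshold is derived from the distance between that client's template and the other enrolled templates. This threshold rule, the grid-free histogram and the chi-square distance are assumptions, not the paper's CD-th formula.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_hist(gray, P=8, R=1):
    """Uniform LBP histogram of a face image (no block grid, for brevity)."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def chi2(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def client_thresholds(templates, margin=0.5):
    """Client-dependent thresholds with one enrollment sample per person: each client's
    threshold is a fraction of the distance to the closest other client's template."""
    th = {}
    for cid, tpl in templates.items():
        others = [chi2(tpl, t) for c, t in templates.items() if c != cid]
        th[cid] = margin * min(others)
    return th

def verify(probe_gray, client_id, templates, thresholds):
    return chi2(lbp_hist(probe_gray), templates[client_id]) <= thresholds[client_id]

# toy usage: three clients, one random "face image" each
faces = {c: np.random.randint(0, 256, (64, 64)).astype(np.uint8) for c in ("A", "B", "C")}
templates = {c: lbp_hist(im) for c, im in faces.items()}
thresholds = client_thresholds(templates)
print(verify(faces["A"], "A", templates, thresholds))
```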
Subscriber radio location techniques for code division multiple access (CDMA) cellular networks have been studied extensively in recent years. The network-based angle of arrival (AOA), time difference of arrival (TDOA), and time of arrival (TOA) techniques offer solutions to the position estimation problem. In this paper, the signal processing scheme of an IS-95-based wireless location system is presented, in which the TOA of reverse access channel transmissions is measured using sub-correlation detection algorithms and the TOA estimation accuracy is improved by a second search. Furthermore, reverse access channel decoding is implemented to identify the access channel message type, the mobile identification, and related message fields.
With the rapid increase in biomedical literature, the deluge of new articles is leading to information overload. Extracting the available knowledge from this huge amount of literature has become a major challenge. GDRMS is developed as a tool that extracts relationships between diseases and genes, and between genes, from the biomedical literature using text mining technology. It is a rule-based system that also provides disease-centered network visualization, constructs a disease-gene database, and offers a gene engine for understanding gene function. The main focus of GDRMS is to provide the research community with a valuable opportunity to explore the relationship between diseases and genes when studying the etiology of disease.
Large-scale, high-quality video-on-demand technology has been a hot research topic. In recent years, the development of cloud computing theory and virtualization cluster technology has also provided new ideas for the construction of large-scale video-on-demand systems. This paper presents the design of a Distributed Video-On-Demand System Based on Cloud Computing (DCC-VOD system), which can be widely applied to various large and medium network environments. It also introduces the core technical components, focusing on the implementation of the load-balancing server for the distributed system. Finally, comparison tests in a simulated environment show that this system not only enhances server performance and resource utilization, but also increases the number of supported users as much as possible. It greatly improves the reliability and stability of the system, and achieves high cost-effectiveness to meet the current needs of medium-sized and even large-scale video-on-demand network environments.
Fingerprint identification is one of the most important biometric technologies. The performance of minutiae extraction and the speed of a fingerprint verification system rely heavily on the quality of the input fingerprint images, so the enhancement of low-quality fingerprints is a critical and difficult step in a fingerprint verification system. In this paper we propose an effective algorithm for fingerprint enhancement. First, a normalization algorithm is used to reduce the variations in gray-level values along ridges and valleys. Then the structure tensor approach is used to estimate the ridge orientation at each pixel of the fingerprint. Finally, we propose a novel algorithm that combines the advantages of one-dimensional Gabor filtering and anisotropic filtering to enhance the fingerprint in the recoverable region. The proposed algorithm has been evaluated on the Fingerprint Verification Competition 2004 database, and the results show that our algorithm performs well while requiring less time.
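A minimal sketch of structure-tensor orientation estimation for the intermediate step named above; the smoothing scale is an assumption, and the normalization and Gabor enhancement stages are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def orientation_field(gray, sigma=5.0):
    """Ridge orientation per pixel from the smoothed structure tensor, in radians.
    The dominant gradient direction is 0.5*atan2(2*Jxy, Jxx - Jyy); ridges run orthogonal to it."""
    g = gray.astype(float)
    gx, gy = sobel(g, axis=1), sobel(g, axis=0)
    jxx = gaussian_filter(gx * gx, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)   # dominant gradient direction
    return theta + np.pi / 2.0                       # ridge orientation

# toy usage on synthetic vertical ridges (intensity varies only along x)
x = np.linspace(0, 8 * np.pi, 128)
ridges = np.sin(x)[None, :].repeat(128, axis=0)
print(np.degrees(orientation_field(ridges)[64, 64]) % 180.0)   # ~90 degrees: vertical ridges
```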
The system uses machine vision technology to inspect logarithmic spiral bevel gears; this is a new non-contact measurement technique with high precision and efficiency. Two cameras at different locations capture two images of the gear, which are collected into the computer. The correspondence between feature points of the two images and the geometric model of optical imaging are used to solve for the three-dimensional coordinates of points on the tooth surface. The tooth shape is then assessed by comparison with the ideal surface point parameters. This inspection method is flexible and provides technical support for the processing, inspection and correction of logarithmic spiral bevel gears.
We present a sensor fusion framework for real-time tracking applications that combines inertial sensors with a camera. To make clear how to exploit the information in the inertial sensors, two different fusion models, a gyroscopes-only model and an accelerometers model, are presented within an extended Kalman filter framework. The gyroscopes-only model uses the gyroscopes to support vision-based tracking without considering acceleration measurements. The accelerometers model uses measurements from the gyroscopes and accelerometers together with the vision data to estimate the camera pose, velocity, acceleration and sensor biases. Experiments on synthetic data and real image sequences show dramatic improvements in tracking stability and in the robustness of the estimated motion parameters for the gyroscopes-only model when the accelerometer measurements drift.
Recognition of handwritten Uighur words is important for Uighur information automation and for developing a new generation of handwritten input systems on mobile platforms. A robust and accurate handwritten character segmentation algorithm is an important prerequisite for Uighur recognition. Based on comprehensive consideration of computation cost, robustness and the characteristics of the script itself, a simple but effective handwritten Uighur character segmentation algorithm is proposed. Furthermore, we develop an Uighur input system on an intelligent mobile platform and simultaneously construct a medium-scale Uighur handwritten word database. The segmentation algorithm is evaluated in detail on this database, and extensive experiments demonstrate the robustness and efficiency of the proposed algorithm.
An ID-based authenticated group key agreement (AGKA) protocol allows a group of members to share a key and provides assurance that the key is shared with the intended group, based on the users' identities; such protocols are used in conferencing environments. In 2004, Choi et al. proposed an ID-based authenticated group key agreement with bilinear maps (the CHL protocols), extended from the Burmester and Desmedt conference key agreement protocols. Unfortunately, Zhang, Chen and Shim showed that these protocols are vulnerable to insider attacks in which two malicious users possess previous authentication transcripts of a party. In this paper, we propose an improved ID-based AGKA. In our scheme, each session has a unique session identity, which is published by the Key Generation Center. With this unique session identity bound to each session, our protocols can prevent the insider attack. In particular, our protocols do not increase the computational cost and remain efficient.
This paper proposes a new paradigm for the design of cryptographic filesystems. Traditionally, cryptographic file
systems have mainly focused on encrypting entire files or directories. In this paper, we envisage encryption at a finer
granularity, i.e. encrypting parts of files. Such an approach is useful for protecting parts of large files that typically
feature in novel applications focused on handling a large amount of scientific data, GIS, and XML data. We extend prior
work by implementing a user level file system on Linux, UsiFe, which supports fine grained encryption by extending the
popular ext2 file system. We further explore two paradigms in which the user is agnostic to encryption in the underlying
filesystem, and the user is aware that a file contains encrypted content. Popular file formats like XML, PDF, and
PostScript can leverage both of these models to form the basis of interactive applications that use fine grained access
control to selectively hide data. Lastly, we measure the performance of UsiFe, and observe that we can support file
access for partially encrypted files with less than 15% overhead.
The Support Vector Machine (SVM), based on the Structural Risk Minimization (SRM) principle of statistical learning theory, has excellent performance in fault diagnosis; however, its training and diagnosis speeds are relatively slow. The Signed Directed Graph (SDG), based on a deep knowledge model, has better completeness, i.e., knowledge representation ability; however, much quantitative information is not utilized in the qualitative SDG model, which often produces false solutions. To speed up the training and diagnosis of the SVM and improve the diagnostic resolution of the SDG, SDG and SVM are combined in this paper. The dimension of the SVM training samples is reduced using the consistent paths of the SDG, improving training and diagnosis speed, while the resolution of the SDG is improved by the good classification performance of the SVM. Matlab simulations with the Tennessee-Eastman Process (TEP) simulation system demonstrate the feasibility of the proposed fault diagnosis algorithm.
Ever since the concept of analog network coding (ANC) was put forward by S. Katti, much attention has been focused on how to utilize analog network coding to turn wireless interference, which used to be considered generally harmful, into improved throughput. Previously, only the case of two nodes that need to exchange information has been fully discussed, while extending analog network coding to three or more nodes remains undeveloped. In this paper, we propose a practical transmission scheme that extends analog network coding to more than two nodes that need to exchange information among themselves. We start with the case of three nodes and demonstrate that, using our algorithm, the throughput can be increased by 33% and 20% compared with traditional transmission scheduling and digital network coding, respectively. Then we generalize the algorithm so that it fits scenarios with any number of nodes. We also discuss some technical issues, throughput analysis, and the bit error rate.
Complex causal relationships usually exist among the faults of a complex system. Based on an introduction to Cellular Automata (CA) and its evolution theory, a fault-relation pattern analysis method based on CA is studied, and the application of CA to fault-relation patterns is proposed. An extended CA (ECA) and its algorithm are also proposed. Simulation and analysis of the evolution of the complex process of a multi-fault relation chain are realized. An example analysis of mechanical system fault relations based on CA shows that the proposed ECA method breaks through the limits of the spatial rules and homogeneity of CA and provides a suitable analytical tool for research on fault-relation patterns. The preliminary study and exploration show that the method can effectively reveal the association patterns among the many faults in a system.
This article focuses on an application in chemical engineering. A fuzzy modeling methodology is designed to determine two relevant characteristics of a self-assembling chemical compound (a ferrocenylsiloxane polyamide): surface tension and maximum UV absorbance, measured as functions of temperature and concentration. One of the most important features of a fuzzy rule-based inference system for determining the characteristics of the polyamide solution is that it allows the knowledge contained in the model to be interpreted and improved with a priori knowledge. The results obtained with the proposed method are highly accurate and can be further optimized by utilizing the information available during the modeling process. The results show that applying a Mamdani fuzzy inference system to the estimation of the optical and surface properties of a polyamide solution is theoretically feasible and computationally reliable.
The OWL (Web Ontology Language) is the de facto standard ontology description language of the Semantic Web. Because OWL is mainly designed for use by applications that need to process the content of information, it is difficult for domain experts to read and understand when building or verifying domain ontologies expressed in OWL. ORM (Object Role Modeling) is a conceptual modeling language with graphical notations; its models/schemas can be translated into pseudo-natural language, which makes it easier, even for domain experts who are not IT specialists, to create, check and adapt knowledge about the UoD (Universe of Discourse). Based on a formal logic analysis of OWL DL and ORM and an extension of the ORM notations, mapping rules are presented to visualize OWL DL ontologies with ORM.