This PDF file contains the front matter associated with SPIE Proceedings Volume 8350, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Traditional safety evaluation of urban rail transit operation has been limited to individual stations or lines rather than taking the perspective of network-wide operational safety. Addressing the characteristics of urban rail transit network operation under the new situation, a network model of urban rail transit was established based on complex network theory, and a formal description of the model was given. On this basis, a safety evaluation index system for urban rail transit network operation was constructed from the aspects of passenger traffic, environment and others; it includes hidden-hazard indexes, accident indexes and a safety economics index, and aims to provide support for overall safety evaluation.
To address the contradictions and difficulties of information access on the Internet, this study draws on data mining techniques and recommender systems to propose and implement an Internet-oriented personalized information recommendation system based on data mining. The system is divided into an offline part and an online part. The offline part mines transaction patterns from the site server's access log files using association rule mining; the online part uses the mined association rules to deliver an intelligent personalized recommendation service. The system thus provides a personalized information recommendation service based on mined association rules. The system was tested experimentally, confirming its feasibility and validity.
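As a sketch of the offline mining step described above, the following computes support/confidence-based association rules over page-visit sessions; the session data, thresholds and page names are illustrative assumptions, not the paper's actual data:

```python
from itertools import combinations

def mine_rules(sessions, min_support=0.4, min_confidence=0.6):
    """Mine single-antecedent association rules A -> B from page-visit sessions."""
    n = len(sessions)
    items = {p for s in sessions for p in s}

    # Support of an itemset = fraction of sessions containing all its items.
    def support(itemset):
        return sum(1 for s in sessions if itemset <= s) / n

    rules = []
    for a, b in combinations(sorted(items), 2):
        for ante, cons in ((a, b), (b, a)):
            supp = support({ante, cons})
            if supp >= min_support:
                conf = supp / support({ante})
                if conf >= min_confidence:
                    rules.append((ante, cons, round(supp, 2), round(conf, 2)))
    return rules

# Hypothetical page-visit sessions reconstructed from a server log.
sessions = [{"home", "news"}, {"home", "news", "sports"},
            {"home", "sports"}, {"news", "sports"}]
rules = mine_rules(sessions)
```

The online part would then match a visitor's current session against the antecedents of the mined rules to pick pages to recommend.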
A multi-attribute decision model for book procurement bidders is presented, based on a number of evaluation indicators and on the characteristics of group decision-making by several evaluators. For each evaluator, an ideal solution and a negative ideal solution are defined, and the relative closeness of each supplier is computed for each evaluator. The ideal and negative ideal solutions of the evaluation committee are then defined on the basis of the group closeness matrix, and the final supplier evaluation results are calculated by the decision-making group. The model is illustrated through the application of experimental data.
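The ideal-solution/relative-closeness computation described above follows the TOPSIS pattern; a minimal single-evaluator sketch, with hypothetical supplier scores and weights (the paper's group aggregation step is not reproduced), might look like:

```python
import math

def topsis(matrix, weights):
    """Relative closeness of each alternative (rows) over benefit criteria (cols)."""
    ncols = len(matrix[0])
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    ideal = [max(col) for col in zip(*v)]        # best value per criterion
    negative = [min(col) for col in zip(*v)]     # worst value per criterion
    closeness = []
    for row in v:
        d_pos = math.dist(row, ideal)            # distance to ideal solution
        d_neg = math.dist(row, negative)         # distance to negative ideal
        closeness.append(d_neg / (d_pos + d_neg))
    return closeness

# Hypothetical scores of three suppliers on quality, delivery and service.
scores = [[7, 9, 9], [8, 7, 8], [9, 6, 8]]
c = topsis(scores, weights=[0.5, 0.3, 0.2])
```

In the group setting, one closeness vector per evaluator would be collected into the group closeness matrix before the committee-level ideal solutions are formed.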
The scheduling problem in distributed systems is known to be NP-complete, and methods based on heuristic or metaheuristic search have been proposed to obtain optimal and suboptimal solutions. Task scheduling is a key factor for distributed systems to achieve better performance. In this paper, an efficient method based on a memetic algorithm is developed to solve the distributed systems scheduling problem. To balance load efficiently, Artificial Bee Colony (ABC) is applied as the local search in the proposed memetic algorithm. The proposed method is compared to an existing memetic-based approach in which Learning Automata is used as the local search. The results demonstrate that the proposed method outperforms the above-mentioned method in terms of communication cost.
With the development of peer-to-peer (P2P) technology, file sharing is becoming the hottest and fastest-growing application on the Internet. Although the various protocols offer distinct benefits, our research shows that, given a proper model, most of the seemingly different protocols can be classified into the same framework. In this paper, we propose an improved Chord algorithm based on a binary tree for P2P networks. We perform extensive simulations to study the proposed protocol. The results show that the improved Chord reduces the average lookup path length without increasing the complexity of node joining and departure.
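For background, standard Chord routes greedily via finger tables, and the lookup path length is the hop count this routing produces; the binary-tree improvement itself is not reproduced here. A minimal sketch of baseline Chord lookup, with an illustrative 6-bit identifier ring and node set:

```python
M = 6                       # identifier bits; ring size 2^M
RING = 2 ** M
NODES = sorted([1, 8, 14, 21, 32, 38, 42, 48, 51, 56])

def successor(k):
    """First node clockwise from key k on the ring."""
    for n in NODES:
        if n >= k % RING:
            return n
    return NODES[0]

def fingers(n):
    """Finger table of node n: successor(n + 2^i) for i in 0..M-1."""
    return [successor(n + 2 ** i) for i in range(M)]

def lookup(start, key, hops=0):
    """Greedy Chord routing: forward to the closest preceding finger."""
    if successor(start + 1) == successor(key):   # the next node owns the key
        return successor(key), hops + 1
    best = start
    for f in fingers(start):
        # keep the finger that advances furthest without passing the key
        if (f - start) % RING < (key - start) % RING:
            best = f
    if best == start:
        return successor(key), hops + 1
    return lookup(best, key, hops + 1)

owner, hops = lookup(1, 54)
```

Averaging `hops` over many random (start, key) pairs gives the average lookup path length that the improved Chord is measured against.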
The optical flow algorithm proposed by Horn and Schunck (OFCE-HS) in 1981 was the first technique and remains one of the best performers for motion estimation. Researchers have attempted to implement OFCE-HS in real-time hardware. The hardware architecture of OFCE-HS proposed by Martin et al., with full integer arithmetic for all calculations, is one such attempt. However, that architecture has a significant drawback: it requires two dividers, which decrease the speed of the system, increase resource usage and add truncation errors in the least significant bits. To overcome this problem, a new hardware architecture for OFCE-HS is presented in this paper. By using a combination of integer and fractional arithmetic, it is possible to reduce the number of dividers and improve performance. The goal of this work is to design a hardware architecture for OFCE-HS that increases system speed, lowers resource utilization and achieves good precision and accuracy compared with previous works.
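For reference, the standard Horn–Schunck iteration (not spelled out in the abstract) makes the role of the divider explicit. In the usual notation, with image derivatives I_x, I_y, I_t, smoothness weight α, and local flow averages \bar{u}, \bar{v}, each pixel update performs one division by the same denominator:

```latex
u^{k+1} = \bar{u}^{k} - \frac{I_x\left(I_x\bar{u}^{k} + I_y\bar{v}^{k} + I_t\right)}{\alpha^{2} + I_x^{2} + I_y^{2}},
\qquad
v^{k+1} = \bar{v}^{k} - \frac{I_y\left(I_x\bar{u}^{k} + I_y\bar{v}^{k} + I_t\right)}{\alpha^{2} + I_x^{2} + I_y^{2}}
```

Because the two updates share the denominator, how the division is realized (and with what arithmetic) dominates the speed, resource and precision trade-offs the abstract discusses.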
Knowledge sharing is carried out through various knowledge-sharing forums, which requires multiple logins through multiple browser instances. Here a single multi-forum knowledge-sharing concept is introduced that requires only one login session, allowing the user to connect to multiple forums and display the data in a single browser window. A few optimization techniques are also introduced to speed up access time using cloud computing.
Nowadays, the relational model faces the challenge of being applied to massively distributed databases and cloud databases, where it cannot easily be scaled out. The main reason is the lack of a proper data distribution unit and a uniform data distribution model. In this paper, a new data distribution model is proposed. Data multitrees, as semantic clusters of data, are taken as the distribution units. The schema multitree and the data multitree are defined, and a method of designing the schema graph is proposed to ensure that the data graph is a data multitree. Three theorems prove the correctness of the proposed method. Since relational databases can be viewed as data multitrees, semantically related data can be split or unified easily with multitree operations, and the scalability of the relational model can thereby be improved. In addition, this data distribution model is transparent to programmers.
A Mobile Ad hoc Network (MANET) is characterized by mobile nodes, multihop wireless connectivity, an infrastructureless environment and dynamic topology. A recent trend in ad hoc network routing is the reactive on-demand philosophy, in which routes are established only when required. Stable routing is a major concern in ad hoc routing, and security and power efficiency are the major concerns in this field. This paper is an effort to use security to achieve more reliable routing. The ad hoc environment is accessible to both legitimate network users and malicious attackers. The proposed scheme is intended to incorporate security aspects into existing protocols. The study will help make protocols more robust against attacks and achieve stable routing.
In this paper, we propose a parallel chaos-based encryption scheme that takes advantage of dual-core processors. The chaos-based cryptosystem is generated combinatorially from the logistic map and the Fibonacci sequence, with the Fibonacci sequence employed to convert the values of the logistic map to integer data. The parallel algorithm is designed with a master/slave communication model using the Message Passing Interface (MPI). The experimental results show that the chaotic cryptosystem possesses good statistical properties and that the parallel algorithm delivers better performance than the serial version. It is suitable for the encryption and decryption of large amounts of sensitive data.
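A minimal serial sketch of a logistic-map stream cipher in the spirit of the scheme described; note that the integer conversion below is simple quantization standing in for the paper's Fibonacci-based conversion, and the parameters are illustrative:

```python
def keystream(x0, r=3.99, n=16):
    """Logistic-map keystream: iterate x -> r*x*(1-x) and quantize to bytes.
    The byte conversion is an illustrative stand-in for the paper's
    Fibonacci-sequence conversion."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)              # chaotic iteration on (0, 1)
        out.append(int(x * 256) % 256)   # quantize to 0..255
    return out

def xor_cipher(data, x0):
    """Encrypt/decrypt by XOR with the chaotic keystream (symmetric)."""
    ks = keystream(x0, n=len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

ct = xor_cipher(b"sensitive data!!", 0.3141592653)
pt = xor_cipher(ct, 0.3141592653)        # same key recovers the plaintext
```

In the MPI master/slave version, the master would split the data into blocks and each slave would encrypt its block with an independently seeded keystream.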
Nowadays, the web has enabled an explosive growth of information sharing (there are currently over 4 billion pages covering most areas of human endeavor), so the web faces a new challenge of information overload. The challenge before us is not only to help people locate relevant information precisely but also to access and aggregate a variety of information from different resources automatically. Current web documents are in human-oriented formats; they are suitable for presentation, but machines cannot understand their meaning. To address this issue, Berners-Lee proposed the concept of the semantic web. With semantic web technology, web information can be understood and processed by machines, which provides new possibilities for automatic web information processing. A main problem of semantic web information retrieval is that when the retrieval system does not have enough knowledge, it returns a large number of meaningless results to users because of the huge amount of information. In this paper, we present the architecture of an information retrieval system based on the semantic web. In addition, our system employs an inference engine to check whether a query should be posed to the keyword-based search engine or to the semantic search engine.
Localization techniques in wireless sensor networks can be divided into two groups: anchor-based and anchor-free. In anchor-based methods, the anchor nodes first distribute their location information through the network, from which the average distance per hop is estimated. Non-anchor nodes record the shortest path, in hops, to each anchor; by combining this hop count with the estimated average hop length, they estimate their distance to each anchor, and from these distance estimates they compute their own location. In the clustered variant, the network nodes are first clustered: each anchor becomes a cluster head, and the cluster members begin localization using information received from their cluster head, starting with the nodes located in the overlap region between two clusters. Although the scalability of anchor-based algorithms is increased by node clustering, their precision and efficiency still depend on the number of anchor nodes, and the need for anchors in all conditions limits their use in wireless sensor networks.
Among the algorithms that do not need anchors, one early algorithm introduced a new method for building a local graph of the network that can be used to compute the relative positions of nodes. First, each node builds a graph centered on itself; then the overall network graph is assembled and each node adjusts its coordinates using the algorithm. Because of limitations in the trigonometric method used in this algorithm, the computed coordinates are not reliable and fail in many cases. Other anchor-free algorithms try to use methods other than trigonometric ones for localization.
For instance, methods based on graph drawing or on mass-spring algorithms can be mentioned. These kinds of algorithms take much time and consume a lot of energy. In order to improve the quality of the algorithm's results and prevent the propagation of errors, we define a secondary parameter called the computed location accuracy. This parameter indicates location accuracy and takes a value between zero and one.
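The anchor-based distance estimation described above (hop count multiplied by average hop length, as in DV-Hop-style schemes) can be sketched as follows; the anchor positions and hop counts are made up for illustration:

```python
import math

# Anchors with known positions; hop counts between anchors are assumed
# to be the result of network-wide flooding.
anchors = {"A": (0.0, 0.0), "B": (30.0, 0.0), "C": (0.0, 40.0)}
hops_between = {("A", "B"): 3, ("A", "C"): 4, ("B", "C"): 5}

# Average hop length = sum of inter-anchor distances / sum of hop counts.
total_dist = sum(math.dist(anchors[a], anchors[b]) for a, b in hops_between)
avg_hop = total_dist / sum(hops_between.values())

def estimate_distances(hop_counts):
    """A node multiplies its hop count to each anchor by the average hop length."""
    return {a: h * avg_hop for a, h in hop_counts.items()}

# Hypothetical node that is 2 hops from A, 2 from B and 3 from C.
d = estimate_distances({"A": 2, "B": 2, "C": 3})
```

The resulting per-anchor distances would then feed a multilateration step to produce the node's coordinates, which the accuracy parameter in [0, 1] can qualify.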
The purpose of this paper is to classify the sole patterns of a 3D shoe model composed of scattered point cloud data. Sole patterns can be divided into five categories based on the texture of each pattern. The point cloud data is sliced into a number of layers, and the unordered data points in each layer are projected onto a viewing plane to obtain a 2D shoeprint, in which texture elements can be further segmented by region growing. Each segmented texture element can then be classified into one of two types, non-closed curve or closed curve, by detecting whether there are point cloud data in each external unit of the region and by looking for the points nearest to the region. Finally, the texture element can be identified as one of the five categories by analyzing its geometrical characteristics.
Previous fault analysis of RSA with the left-to-right algorithm was based on modifying the public modulus N, which is difficult to carry out in practice. In search of a more practical attack, and considering that the multiplier of a microprocessor is easily affected by voltage, a fault can be injected into the multiplier during RSA signing by adjusting the supply voltage. This paper proposes a new fault analysis of RSA signatures based on multiplier errors, improving the feasibility of the attack, and extends the attack to RSA with the fixed-window algorithm. Finally, the complexity of the algorithm is analyzed, and its extensibility and feasibility are demonstrated in theory and by simulation experiments. The experimental results show that the new fault analysis algorithm is more practical.
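For context, the "left-to-right" algorithm targeted by the attack is ordinary square-and-multiply exponentiation, where every multiplication passes through the hardware multiplier that the voltage glitch corrupts. A sketch with toy RSA parameters (illustrative numbers only; the attack itself is not reproduced):

```python
def sign_ltr(m, d, n):
    """Left-to-right square-and-multiply: s = m^d mod n.
    Each exponent bit triggers a squaring; each 1-bit adds a multiplication --
    these are the operations a faulted multiplier would corrupt."""
    s = 1
    for bit in bin(d)[2:]:
        s = (s * s) % n          # square for every exponent bit
        if bit == "1":
            s = (s * m) % n      # multiply when the bit is 1
    return s

# Toy RSA numbers (illustrative only): n = 5 * 11 = 55, d = 27, e = 3.
sig = sign_ltr(2, 27, 55)
```

Comparing a correct signature with one produced under an injected multiplier fault is what lets the attack recover information about the private exponent d.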
Software development effort is one of the most important metrics in the field of software engineering. Since accurate estimation of this metric affects the project manager's plans, numerous research works have aimed to increase the accuracy of estimation in this field. Almost all previous publications in this area used several project features as independent variables and treated the development effort as the dependent one. The Constructive Cost Model (COCOMO) is the most famous algorithmic model for estimating software development effort. Although many researchers have tried to improve the performance of COCOMO using non-algorithmic methods, all of them estimated the development effort regardless of the project type. In this paper, the effect of considering the project type in estimation was investigated by means of neural networks. The obtained results were compared with the original COCOMO and with a plain neural network. The comparisons showed that the software project type can affect the accuracy of estimation.
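For reference, the algorithmic baseline the neural models are compared against is the Basic COCOMO effort equation E = a · KLOC^b, with published constants per development mode; the 33.2 KLOC project below is a made-up example:

```python
def cocomo_basic(kloc, mode="organic"):
    """Basic COCOMO effort in person-months: E = a * KLOC^b.
    (a, b) are the published Basic COCOMO constants per mode."""
    a, b = {"organic": (2.4, 1.05),
            "semidetached": (3.0, 1.12),
            "embedded": (3.6, 1.20)}[mode]
    return a * kloc ** b

effort = cocomo_basic(33.2, "semidetached")   # hypothetical 33.2 KLOC project
```

The project-type idea in the abstract is analogous to the mode parameter here: the same size yields different effort depending on the kind of project.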
Detecting and tracking space objects in video sequences is a challenging task of wide interest. In this paper, a comprehensive framework for detecting and tracking space objects is presented. Unlike the traditional linear structure of tracking after detection, this framework also allows detection after tracking. Moreover, the combination of the level set and frame subtraction algorithms in the tracking subsystem makes detection and tracking of a space object throughout an entire video sequence a reality. Experimental results on 15 videos generated by STK show robust tracking under both star backgrounds and earth backgrounds.
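A frame-subtraction step of the kind used in the tracking subsystem can be sketched as follows; the tiny frames and the threshold are illustrative, and the real system combines this with a level-set method that is not shown here:

```python
def frame_difference(prev, curr, threshold=20):
    """Mark pixels whose intensity changed by more than `threshold`
    between two gray-scale frames (equal-size lists of rows)."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

def centroid(mask):
    """Centroid of the changed pixels -- a crude object-position estimate."""
    pts = [(x, y) for y, row in enumerate(mask)
           for x, v in enumerate(row) if v]
    if not pts:
        return None
    return (sum(x for x, _ in pts) / len(pts),
            sum(y for _, y in pts) / len(pts))

# A dim star field with one bright object moving one pixel to the right.
f0 = [[10, 10, 10, 10], [10, 200, 10, 10], [10, 10, 10, 10]]
f1 = [[10, 10, 10, 10], [10, 10, 200, 10], [10, 10, 10, 10]]
mask = frame_difference(f0, f1)
```

The changed-pixel mask both seeds detection and, when fed back from a tracked region, supports the detection-after-tracking direction the abstract mentions.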
To gain a competitive edge over each other, organizations resort to business intelligence, which refers to the information available to an enterprise for making strategic decisions. The data warehouse, being the repository of data, provides the back end for achieving business intelligence. The design of the data warehouse therefore holds the key to extracting and obtaining the relevant information that facilitates strategic decisions. The initial focus of design was on conceptual models, but object-oriented multidimensional modelling has now emerged as the foundation for data warehouse design. Several proposals have been put forth for object-oriented multidimensional modelling, each incorporating some features but not all. This paper consolidates all the features previously introduced along with newly introduced ones, proposing a new model with the features to be incorporated when designing a data warehouse.
In order to improve the accuracy of the single-axial rotation INS (SRINS), the idea of level damping from the platform INS is introduced into the system, and the principle of the damping is presented. On the basis of an analysis of both inner level damping and outer level damping, a mixed level damping scheme is put forward. The results show that by introducing the damping network into the system, both the Schuler oscillation and the Foucault oscillation are eliminated and the precision of the SRINS is greatly enhanced. At the same time, the mixed level damping not only reduces the effect of vehicle maneuvering on the precision of the system but also avoids the limitation of requiring an accurate reference velocity.
The paper deals with structural topology optimization under a fuzzy constraint. The optimal topology of the structure is defined as a material distribution problem, with the weight of the structure as the objective. Multifrequency dynamic loading is considered, and the optimal topology design has to eliminate the danger of resonance vibration. The uncertainty of the loading is defined with the help of fuzzy loading, and a special fuzzy constraint is created from the exciting frequencies. The presented study is applicable in mechanical and civil engineering. An example demonstrates the approach.
This paper explores how to use caching to improve the performance of a web system, designs multi-layer cache strategies based on Seam, and constructs a web caching system at four levels. This strategy can improve web system scalability and reduce the load on the system.
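A generic two-level sketch of such a layered strategy (Seam's actual cache providers and configuration are not reproduced; the function names and the TTL policy here are assumptions for illustration):

```python
import time
from functools import lru_cache

# Level 1: per-process memoization of rendered fragments.
@lru_cache(maxsize=128)
def render_fragment(page_id):
    return f"<div>page {page_id}</div>"   # stands in for expensive rendering

# Level 2: a shared store with per-entry expiry, standing in for the
# external/page-level cache a web application would configure.
_store = {}

def get_page(page_id, ttl=60.0, now=time.monotonic):
    entry = _store.get(page_id)
    if entry and now() - entry[1] < ttl:        # fresh level-2 hit
        return entry[0]
    html = render_fragment(page_id)             # falls back to level 1 / render
    _store[page_id] = (html, now())
    return html
```

Each additional level answers a request before it reaches the slower layer below, which is what reduces the load on the system.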
This paper deals with the performance of standard 32 kb/s ADPCM, measured using the signal-to-noise ratio (SNR). The new contribution is the mathematical derivation of the SNR for asynchronous tandem ADPCM systems, as given in section 5. Another contribution is the study of this performance using a QAM modem signal with different constellations. A computer simulation program has been developed, and a number of simulation tests have been carried out using a QAM modem signal at 9.6 kb/s with four types of constellations: rectangular, and (5,11), (4,12) and (8,8) circular. The results of testing asynchronous tandem ADPCM systems show that the performance degrades as the number of ADPCM stages increases. The results also show that performance with the circular constellations is better than with the rectangular one.
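The SNR measurement itself is straightforward; below is a sketch using a sine wave as a stand-in for the QAM modem signal and a crude quantizer standing in for an ADPCM stage (both are assumptions, since the paper's codec is not reproduced):

```python
import math

def snr_db(signal, noisy):
    """SNR in dB: 10*log10(signal power / error power)."""
    p_sig = sum(s * s for s in signal) / len(signal)
    p_err = sum((s - n) ** 2 for s, n in zip(signal, noisy)) / len(signal)
    return 10 * math.log10(p_sig / p_err)

# A sine 'modem' signal and a crude quantizer standing in for one ADPCM stage.
x = [math.sin(2 * math.pi * k / 32) for k in range(256)]
q = [round(v * 16) / 16 for v in x]        # first quantization stage
q2 = [round(v * 8) / 8 for v in q]         # a second, coarser tandem stage
single_stage = snr_db(x, q)
tandem = snr_db(x, q2)
```

As in the paper's asynchronous tandem results, each additional lossy stage lowers the measured SNR relative to the original signal.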
The concentrated wind energy turbine is a new type of wind-power generator set that utilizes low-density wind energy after concentrating it. In order to address control system problems of the concentrated wind energy turbine, this article introduces a wind power testing platform based on dSPACE hardware-in-the-loop simulation, and the wind power control principle is researched and analyzed on this testing platform. The experimental results show that our testing platform can test not only the whole running process but also the fault protection function.
ORM (Object Role Modeling) has been used as an ontology modeling language to model domain ontologies. In order to publish domain ontologies modeled in ORM on the Semantic Web, ORM models need to be translated into OWL 2, the latest standard Web Ontology Language. Several equivalent transformation methods for ORM models are considered, and a series of mapping rules are presented.
Multiobjective linear programming (MOLP) is one of the most important models for decision-making experts, and it becomes even more interesting when the parameters are represented by fuzzy numbers. In this paper, we present an approximate algorithm for solving fuzzy multiobjective linear programming (FMOLP) problems in which the coefficients of the objective functions and constraints are fuzzy. The algorithm converts the fuzzy coefficients to crisp coefficients and then solves the resulting MOLP problem using the maximin method. A detailed description and analysis of the algorithm are supplied, and an illustrative example is presented.
This study explores the application of Particle Swarm Optimization (PSO) to the optimization of a cross-flow plate-fin heat exchanger. Minimization of the total annual cost is the target of the optimization. Seven design parameters, namely the heat exchanger lengths at the hot and cold sides, fin height, fin frequency, fin thickness, fin-strip length and number of hot-side layers, are selected as optimization variables. A case study from the literature demonstrates the effectiveness of the proposed algorithm in achieving more accurate results.
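A minimal PSO of the kind applied here, run on a smooth stand-in cost function (the real seven-variable annual-cost model is not reproduced; the bounds, coefficients and bowl-shaped cost below are illustrative assumptions):

```python
import random

def pso(f, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal PSO minimizing f over box bounds [(lo, hi), ...]."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])   # clamp to the box
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in cost: a smooth bowl with minimum 5.0 at (3, 2), a placeholder
# for the seven-variable heat-exchanger annual-cost model.
cost = lambda p: (p[0] - 3) ** 2 + (p[1] - 2) ** 2 + 5.0
best, val = pso(cost, [(0, 10), (0, 10)])
```

For the heat exchanger, `bounds` would hold the feasible ranges of the seven design parameters and `f` would evaluate the total annual cost.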
The paper describes a possible design of a tracked vehicle computational model and a basic testing procedure for simulating dynamic track loading. The proposed approach leads to an improvement in tracked vehicle course stability. The computational model is built for the MSC.ADAMS AVT computational simulation system and consists of two basic parts: a geometrical part and a contact computational part. The aim of the simulation calculations is to determine the influence of changes in specific track design parameters on the examined qualities of the vehicle-track link and on tracked vehicle course stability. The work quantifies the influence of changes in track preloading values on the required torque of the driving sprocket. Further research possibilities and potential are also discussed.
New technologies set the stage for mobile learning. In this paper, we explore a mobile teaching-learning pattern and its advantages. We then model courses with Atom and the Atom Publishing Protocol. Based on this pattern and modeling, we implemented a mobile learning client with Apple technologies that enables anytime, anywhere learning. Finally, we discuss the application of our system.
Hanavan's fifteen-rigid-body human model was simplified into a six-rigid-body model, and a six-degree-of-freedom Kane dynamic model was then set up. Using the human body parameters and muscle parameters, the six-degree-of-freedom Kane formulation and its constraint conditions were applied to obtain the tumbling state of an elderly person whose feet stop suddenly, and the initial rotation speed of each part of the body was calculated. The tumbling movement of the elderly person was then simulated, and the impact force from the ground surface was obtained.
Node mobility and limited energy are the two main factors affecting link stability in mobile ad hoc networks. This paper proposes an advanced routing protocol, PLS-AOMDV (Prediction of Link Stability-AOMDV), based on the AOMDV multi-path routing protocol, which periodically predicts link stability by taking both node mobility and energy consumption into consideration so as to choose the highest-stability link for transmitting data. The simulation results show that PLS-AOMDV can significantly increase the packet delivery rate and the lifetime of the network.
The rapid development of parallel computer systems has made parallel operating environments mature and widely used in scientific computing and research in many fields; parallel database research has therefore attracted increasing attention and has become an important field of database study. Considering the characteristics of network-based parallel clusters and the current trends in parallel computer systems, this paper analyzes the data skew problem of parallel databases in the data distribution environment of networked workstation clusters and proposes a balanced data distribution scheme that can adapt to dynamic data.
During the simulation of real-time three-dimensional scenes, popular modeling software and real-time rendering platforms are not compatible. The common solution is to create the three-dimensional scene model with modeling software and then transform it into a format supported by the rendering platform. Taking a digital campus scene simulation as an example, this paper analyzes and solves problems that arise during the transformation from 3ds Max to MultiGen Creator, such as surface loss, texture distortion and loss, and model flicker. It also proposes an optimization strategy for the transformed model. The results show that this strategy is a good solution to the various problems arising in the transformation and that it can increase the rendering speed of the model.
Based on the theory of collaborative self-directed study and the strengths of modern educational technology, this study explores the application of websites to collaborative self-directed college English learning. It introduces the characteristics and functions of a website developed to assist college English teaching in China. It also points out the problems currently existing among teachers and students, and puts forward some suggestions and strategies for improving the application of such websites.
Nowadays, practical programs are often so complex and large-scale that they are not as easy to analyze and debug as one might expect, and it is quite difficult to diagnose attacks and find vulnerabilities in such large-scale programs. Dynamic program slicing has thus become a popular and effective method for program comprehension and debugging, since it can greatly reduce the analysis scope and drop irrelevant data that do not influence the final result. Moreover, most existing dynamic slicing tools perform slicing at the source code level, but source code is not easy to obtain in practice, and we believe systems are needed to help users understand binary programs. In this paper, we present an approach for diagnosing attacks using dynamic backward program slicing of binary executables, and we provide a dynamic binary slicing tool named DBS to analyze binary executables precisely and efficiently. It computes the set of instructions that may have affected, or been affected by, a slicing criterion set at a certain location in the binary execution stream. The tool can also organize the slicing results clearly and hierarchically using function call graphs and control flow graphs.
The growing proliferation of computer viruses has become a lethal threat and a research focus of network information security. New viruses keep emerging, the number of viruses is growing, and virus classification is becoming increasingly complex. Virus naming cannot be unified because different agencies capture samples at different times; although each agency has its own virus database, communication between them is lacking, virus information is incomplete, or only a small number of samples are available. This paper introduces the current state of virus database construction at home and abroad, analyzes how to standardize and complete the description of virus characteristics, and then gives a design scheme for a computer virus database with information integrity, storage security and manageability.
Accurate point matching is a crucial and challenging step in feature-based image registration, especially for images with a monotonous background. In this paper, we propose a robust point matching algorithm for image registration that integrates a cyclic string matching method with two decision criteria, i.e., the stability and the accuracy of the transformation error. In this algorithm, a filtering strategy is designed to eliminate dubious matches and obtain exactly matched point sets. The performance of the proposed algorithm is evaluated by registering two typical image pairs containing repetitive patterns. Compared with Random Sample Consensus (RANSAC) and Graph Transformation Matching (GTM), the proposed algorithm obtains the highest precision and stability.
In order to overcome the piecewise-constant artifacts and the computational burden found, respectively, in Markov Random Field (MRF) models with pairwise neighborhoods and in traditional learning schemes, this paper proposes a clustering-based learning method that works directly on a natural image database, with no filters involved. With this method, we obtain the distribution law of the blocks extracted from natural images, and we then build a prior image model according to the learned law. An application to image restoration illustrates its effectiveness through a comparison between the high-order MRF prior model and the pairwise MRF prior model.
Forming a seamless, smooth panoramic image from several component images is a hot spot in computer vision, image processing and computer graphics. According to the application conditions of the system, especially limited system resources, concrete integration technology solutions and an implementation algorithm are put forward, based on an M operator and the wavelet transform, together with feasible design considerations for the algorithm and its operation. At the same time, the stitching effect, the boundary conditions of the seams and the basic factors of the classic algorithms are weighed against each other for the application field.
Large amounts of entity data are continuously published on web pages, and extracting these entities automatically for further application is very significant. Rule-based entity extraction yields promising results; however, it is labor-intensive and hard to scale. This paper proposes a web entity extraction method based on entity attribute classification, which avoids manual annotation of samples. First, web pages are segmented into different blocks by the Vision-based Page Segmentation (VIPS) algorithm, and a binary LibSVM classifier is trained to retrieve the candidate blocks that contain entity content. Second, the candidate blocks are partitioned into candidate items, LibSVM classifiers are applied to annotate the attributes of the items, and the annotation results are aggregated into an entity. Results show that the proposed method performs well in extracting agricultural supply and demand entities from web pages.
To solve the problem of calibrating fiber grating measurements of pressure, temperature, tilt angle and other important parameters, a satisfactory solution is to use terahertz hollow-fiber grating sensors with a high-strength dielectric-coated metallic structure. Preliminary theoretical analysis, simulation and test results show that polyethylene, with its smaller absorption in the terahertz band, is an ideal choice for the membrane material of the terahertz hollow fiber. A phase-shifted fiber grating in a hollow fiber with a dielectric-coated metallic structure constitutes a kind of calibration device for fiber grating sensors, and a differential structure can be used to overcome environmental influences. Coherent detection with the dielectric-coated metallic hollow fiber provides high gain; the optical heterodyne method is used to detect the frequency of the phase-shifted fiber grating, with a frequency range on the order of 10^12 Hz and a frequency resolution of 1 kHz.
This paper proposes a simple, fast segmentation method for sports scene images. Much work so far has sought ways to reduce the effect of varying shading in smooth areas. A novel pretreatment method is proposed to eliminate the different shades, and an internal filling mechanism is used to convert the pixels enclosed by regions of interest into pixels of interest. Tests on sports scene images have confirmed the effectiveness of the method.
In the information age of knowledge explosion and rapid development, how to quickly feed back useful information that users are interested in is the problem addressed in this article. Based on data mining, this paper combines an association rule model with a classification model to recommend to target users the e-books that their neighboring users are interested in. The paper introduces the e-book recommendation approach and its key technologies, the system's implementation algorithms and the implementation process; experiments prove that this system can help users quickly find the e-books they need.
We present a general and effective projector calibration method using a ray-based generic model, which consists of rays projected from all pixel elements of the projector. For computing the parameters of the rays, we propose a flexible 3D-coordinate calculation method for the projected calibration target. Since the ray-based generic model does not rely on any assumption, our approach is applicable to arbitrary projection systems. The calibrated rays can be applied to evaluate the projector's actual distortion model, reconstruct 3D points of the scene, and correct geometric distortion of the projected image. Experiments are presented to verify the performance of the proposed technique.
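The abstract does not give the ray-fitting procedure; a common least-squares way to recover one ray of such a generic model from the 3D coordinates of calibration points lit by a single projector pixel is a principal-direction fit, sketched here as an assumption (the paper may parameterize rays differently).

```python
import numpy as np

def fit_ray(points):
    """Fit a 3D ray (point + unit direction) to an N x 3 set of points.

    The least-squares line passes through the centroid along the
    dominant right singular vector of the centred data.
    """
    pts = np.asarray(points, dtype=float)
    origin = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - origin)
    direction = vt[0]
    # Fix the sign so the returned direction is reproducible.
    if direction[np.argmax(np.abs(direction))] < 0:
        direction = -direction
    return origin, direction
```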
Edge extraction is key to applying machine vision in the inspection area. Classical edge detection algorithms have poor anti-noise performance, while the traditional morphological edge detection algorithm has good anti-noise performance but poor edge localization. In this paper, a multi-structure-element morphological edge detection algorithm is applied to edge detection of bottle-mouth and bottle-body images, and it is compared with classical edge detection operators and the traditional morphological operator. The experimental results show that the grayscale morphological edge detection algorithm is efficient, has strong anti-noise capability, and improves detection accuracy.
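The multi-structure-element idea can be sketched as follows: compute the grayscale morphological gradient (dilation minus erosion) for several small structuring elements of different orientations and combine the responses, so edges of all orientations respond while single-pixel noise is damped. The particular elements and the averaging rule below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def morph_gradient(img, offsets):
    """Grayscale morphological gradient (dilation - erosion) for one
    structuring element given as a list of (dr, dc) offsets."""
    h, w = img.shape
    padded = np.pad(img.astype(float), 1, mode='edge')
    stack = np.stack([padded[1 + dr:1 + dr + h, 1 + dc:1 + dc + w]
                      for dr, dc in offsets])
    return stack.max(axis=0) - stack.min(axis=0)

def multi_se_edges(img):
    """Average the gradients from several oriented structuring elements."""
    elements = [
        [(0, -1), (0, 0), (0, 1)],    # horizontal bar
        [(-1, 0), (0, 0), (1, 0)],    # vertical bar
        [(-1, -1), (0, 0), (1, 1)],   # 45-degree diagonal
        [(-1, 1), (0, 0), (1, -1)],   # 135-degree diagonal
    ]
    grads = [morph_gradient(img, se) for se in elements]
    return np.mean(grads, axis=0)
```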
Co-design is a new trend in the social world which tries to capture different ideas in order to use the most appropriate features for a system. In this paper, the co-design of two information system methodologies is considered: rapid application development (RAD) and effective technical and human implementation of computer-based systems (ETHICS). We examine the characteristics of these methodologies to assess the possibility of co-designing or combining them for developing an information system. To this end, four aspects are analyzed: social versus technical approach, user participation and user involvement, job satisfaction, and overcoming resistance to change. Finally, a case study using a quantitative method is analyzed to examine the possibility of co-design along these factors. The paper concludes that RAD and ETHICS are appropriate for co-design and offers some suggestions for it.
Many feature descriptors are insensitive to geometric transformations such as rotation and scale variation. However, most of them cannot effectively deal with blurred images, which is a key problem in many real applications. In this paper, we propose a new feature descriptor that combines the SIFT descriptor with combined blur, scale and rotation invariant Legendre moments (CBRSL). The proposed method inherits the advantages of SIFT and CBRSL, which leads to invariance to scale, rotation and blur degradation simultaneously. We also show how this new descriptor better represents blur- and geometric-invariant features in image registration. The experimental results validate the effectiveness of our method, which is superior to SIFT-based methods.
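The CBRSL construction itself is involved, but its building block, the Legendre moment of an image, can be approximated as below. The mapping of pixel coordinates onto [-1, 1] and the Riemann-sum discretization are standard choices assumed for illustration, not necessarily those of the paper.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_moment(img, p, q):
    """Approximate the order-(p, q) Legendre moment of a grayscale image:

        lambda_pq = (2p+1)(2q+1)/4 * iint P_p(x) P_q(y) f(x, y) dx dy,

    with pixel coordinates mapped to [-1, 1] x [-1, 1]."""
    h, w = img.shape
    x = np.linspace(-1.0, 1.0, w)
    y = np.linspace(-1.0, 1.0, h)
    # legval with a one-hot coefficient vector evaluates P_p directly.
    px = legval(x, [0] * p + [1])
    qy = legval(y, [0] * q + [1])
    dx = 2.0 / (w - 1)
    dy = 2.0 / (h - 1)
    norm = (2 * p + 1) * (2 * q + 1) / 4.0
    return norm * (qy[:, None] * px[None, :] * img).sum() * dx * dy
```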
Laplacian-based matting methods are attracting much attention due to their elegant, high-quality closed-form solutions. In this paper, we develop an alternative Laplacian construction for the matting task using a local linear learning model, and naturally derive its nonlinear extension by incorporating the kernel ridge regression algorithm. Our Laplacian matrix constructions are based on the assumption that the alpha matte of each pixel can be reconstructed from its neighbors' alpha values in each of the overlapping windows. In this way the induced Laplacians better exploit the intrinsic neighborhood structure to constrain the propagation of foreground and background labels. Experimental results demonstrate that the proposed approaches produce highly accurate matte values; our nonlinear method even outperforms other Laplacian-based matting methods on many test images.
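The window assumption — each pixel's alpha reconstructed from its neighbors' alphas — leads to local regression weights much like those of locally linear embedding. The sketch below solves for such weights with a small ridge regularizer; it illustrates only the linear case and is not the authors' matting code (the function name and regularization constant are assumptions).

```python
import numpy as np

def reconstruction_weights(center, neighbors, reg=1e-3):
    """Solve for weights w with sum_j w_j = 1 such that
    sum_j w_j * neighbor_j approximates center (local linear model).

    center: (d,) feature vector; neighbors: (k, d) matrix.
    A small ridge term keeps the local Gram matrix well conditioned.
    """
    diff = neighbors - center           # (k, d) offsets to the center
    gram = diff @ diff.T                # local Gram matrix of offsets
    k = len(gram)
    gram = gram + reg * np.trace(gram + np.eye(k)) * np.eye(k)
    w = np.linalg.solve(gram, np.ones(k))
    return w / w.sum()                  # enforce sum-to-one constraint
```

Stacking (I - W)ᵀ(I - W) over all overlapping windows then yields a matting Laplacian; the kernelized variant replaces the linear fit with kernel ridge regression.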
Image processing algorithms and a fuzzy logic method are used to design a visual tracking controller for mobile robot navigation. In this paper, a wheeled mobile robot is equipped with a camera for observing its task space. The grabbed environmental images are processed by image recognition to obtain the target's size and position, which are fed through input membership functions into a fuzzy logic controller in which fuzzy rules are used for inference. The inference results are passed to the defuzzifier to obtain a physical control signal for the mobile robot's movement. The velocity and direction of the mobile robot are the outputs of the fuzzy logic controller; the difference between the velocities of the two wheels controls the robot's movement direction. The fuzzy logic controller outputs control commands that drive the mobile robot to a position 50 cm in front of the target location. The simulation results verify that the proposed FLC is effective in navigating the mobile robot to track a moving target.
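The fuzzification → rule inference → defuzzification pipeline can be sketched with one input (the target's horizontal offset in the image) and one output (a steering rate). Triangular membership functions, Mamdani min inference and centroid defuzzification are assumed for illustration; the paper's actual rule base uses target size and position.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b on [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_steering(offset):
    """Map horizontal target offset in [-1, 1] (negative = target left)
    to a steering command via min inference + centroid defuzzification."""
    # Rule firing strengths from the input membership functions.
    left   = tri(offset, -1.5, -1.0, 0.0)
    centre = tri(offset, -1.0,  0.0, 1.0)
    right  = tri(offset,  0.0,  1.0, 1.5)
    # Output universe: steering rate (negative = turn left).
    u = np.linspace(-1.0, 1.0, 201)
    agg = np.maximum.reduce([
        np.minimum(left,   tri(u, -1.5, -1.0, 0.0)),  # target left  -> turn left
        np.minimum(centre, tri(u, -0.5,  0.0, 0.5)),  # centred      -> go straight
        np.minimum(right,  tri(u,  0.0,  1.0, 1.5)),  # target right -> turn right
    ])
    if agg.sum() == 0:
        return 0.0
    return float((u * agg).sum() / agg.sum())  # centroid defuzzifier
```

The steering command would then be split into a differential velocity for the two wheels.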
Edge detection of oil spill images on the sea is one of the key technologies for monitoring marine oil spills. This paper presents a new method to detect continuous and closed edges of oil slicks in infrared (IR) aerial images of the sea. The method is composed of two stages: determination of edge points and edge linking. Non-maximal suppression and a self-adaptive dynamic block threshold (SADBT) algorithm are applied to determine edge points. Then an improved edge linking algorithm links the discrete edge points into closed edge contours, using a cost function that combines the Euclidean distance, intensity and angle information of edge ending points to improve the linking decision. Using the proposed algorithm, we obtain continuous and closed edges of oil slicks in IR aerial images, thereby confirming the location and acreage of the oil spill. The experimental results show that the proposed method improves the degree of automation of edge detection and effectively suppresses striping noise, intensity inhomogeneity and weak edge boundaries.
An image may be partially blurred because of defocus, camera shake or object motion. In this paper, we introduce a novel method to extract the blurred area automatically, which consists of two stages: coarse detection and fine extraction. In the coarse detection, we propose a block-based blurred/sharp area detection algorithm which roughly divides the image into blurred, sharp and undefined blocks; both spatial gradient statistics and the frequency-domain power spectrum are used as blur metrics. For the fine extraction, we introduce an improved lazy snapping which takes the blurred and sharp blocks of the coarse detection as seeds and thus extracts the blurred area automatically. Experimental results demonstrate the efficiency of the proposed method.
Aiming at calibrating a camera on site, where the lighting conditions are hard to control and the quality of target images declines as the angle between the camera and the target changes, an adaptive active target is designed and a camera calibration approach based on it is proposed. The adaptive active target, in which LEDs are embedded, is flat and provides active feature points; the brightness of the feature points can therefore be modified by adjusting the driving current, judged against thresholds on image-feature criteria. In order to extract image features accurately, the concept of subpixel-precise thresholding is also proposed: the discrete representation of the digital image is converted to a continuous function by bilinear interpolation, and sub-pixel contours are obtained as the intersection of the continuous function with an appropriately selected threshold. Based on an analysis of the relationship between the image features and the brightness of the target, the area ratio of convex hulls and the grey-value variance are adopted as the criteria. Experimental results revealed that the adaptive active target accommodates well to changes in environmental illumination, and the camera calibration approach based on it achieves high accuracy and fits well for use in various industrial sites.
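The subpixel-precise thresholding idea reduces, along a single scanline, to locating where the linearly interpolated intensity profile crosses the threshold. The 1-D sketch below illustrates this; the paper's 2-D version intersects the bilinearly interpolated surface with the threshold plane.

```python
import numpy as np

def subpixel_crossings(profile, threshold):
    """Locate subpixel positions where a 1-D intensity profile crosses
    a threshold, by linear interpolation between adjacent samples."""
    p = np.asarray(profile, dtype=float) - threshold
    crossings = []
    for i in range(len(p) - 1):
        if p[i] == 0.0:
            crossings.append(float(i))       # sample sits on the threshold
        elif p[i] * p[i + 1] < 0:
            # Linear interpolation of the zero crossing within [i, i+1].
            crossings.append(i + p[i] / (p[i] - p[i + 1]))
    return crossings
```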
Detecting roadside curbs is a challenging research topic for stereo vision, as the curb is only 10-30 cm higher than the road surface and the difference between the curb and the road surface is usually corrupted by noise in the disparity map. In this paper, a roadside curb detection algorithm integrating the advantages of stereo vision and mono vision is proposed. First, rough results are detected from the disparity variation curve, and sign filters are then used to obtain more robust results. Finally, the curb lines are estimated using a weighted Hough transform. Experimental results show that this algorithm can detect roadside curbs quickly and effectively.
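A weighted Hough transform differs from the standard one only in that each candidate point votes with its own confidence instead of 1, so noisy disparity points contribute less to the line estimate. The sketch below uses the normal-form parameterization ρ = x cos θ + y sin θ; the bin counts are illustrative assumptions.

```python
import numpy as np

def weighted_hough_lines(points, weights, n_theta=180, n_rho=200,
                         rho_max=100.0):
    """Weighted Hough accumulation for lines rho = x cos(t) + y sin(t);
    each point votes with its own confidence weight."""
    acc = np.zeros((n_theta, n_rho))
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    for (x, y), wgt in zip(points, weights):
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rho + rho_max) / (2 * rho_max)
                       * (n_rho - 1)).astype(int)
        valid = (idx >= 0) & (idx < n_rho)
        acc[np.arange(n_theta)[valid], idx[valid]] += wgt
    t, r = np.unravel_index(acc.argmax(), acc.shape)
    best_rho = r / (n_rho - 1) * 2 * rho_max - rho_max
    return thetas[t], best_rho, acc
```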
In this paper, we develop a blowhole detection algorithm using texture analysis. We apply a Gabor filter to extract defect candidates and subsequently use texture information to distinguish defects from pseudo-defects. To increase performance, size filtering and an adaptive thresholding method are used. The proposed algorithm was tested on 343 images. The experimental results described in this paper show that the algorithm is effective and suitable for blowhole detection in steel slabs.
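Gabor filtering for defect candidates amounts to convolving the image with an oriented sinusoid windowed by a Gaussian. The sketch below builds the real part of a Gabor kernel and applies it via FFT convolution; the kernel parameters are illustrative, not those tuned in the paper.

```python
import numpy as np

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lam=6.0, gamma=0.5):
    """Real part of a Gabor kernel: a sinusoid of wavelength lam along
    orientation theta, windowed by an anisotropic Gaussian."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * xr / lam)

def gabor_response(img, kernel):
    """Filter the image with the kernel via FFT-based convolution,
    cropped back to the input size ('same' convolution)."""
    h, w = img.shape
    kh, kw = kernel.shape
    fimg = np.fft.fft2(img, s=(h + kh - 1, w + kw - 1))
    fker = np.fft.fft2(kernel, s=(h + kh - 1, w + kw - 1))
    full = np.real(np.fft.ifft2(fimg * fker))
    return full[kh // 2:kh // 2 + h, kw // 2:kw // 2 + w]
```

Thresholding the response magnitude then yields the defect candidates to which size filtering and texture classification are applied.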
Speech recognition is becoming popular in current development on mobile devices. However, mobile devices have limited computational power, memory size and battery life, and speech recognition is a heavy process that requires a large number of samples within each window. The Fast Fourier Transform (FFT) is the most popular transform in speech recognition, but it operates in the complex field with imaginary numbers. This paper proposes an approach based on discrete orthonormal Tchebichef polynomials as a possible alternative to the FFT: the Discrete Tchebichef Transform (DTT) is used instead. Preliminary experimental results show that speech recognition using the DTT produces a simpler and more efficient transformation. The frequency formants obtained with the FFT and DTT have been compared, and they produce essentially identical output for basic vowel and consonant recognition. The DTT thus has the potential to provide simpler computation than the FFT for speech recognition, since its coefficients are real numbers only.
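Discrete orthonormal Tchebichef (Gram) polynomials are usually built by a three-term recurrence; an equivalent and simpler-to-verify construction, sketched below as an assumption, orthonormalizes the monomial Vandermonde matrix with QR, since with uniform weights the resulting columns span the same polynomial spaces. The transform is then a real-valued projection onto this basis.

```python
import numpy as np

def discrete_orthonormal_basis(n_points, order):
    """First `order` discrete orthonormal polynomials on samples
    0..n_points-1, via QR-orthonormalisation of a Vandermonde matrix."""
    x = np.arange(n_points, dtype=float)
    vander = np.vander(x, order, increasing=True)  # columns 1, x, x^2, ...
    q, r = np.linalg.qr(vander)
    # Fix signs so each polynomial has a positive leading coefficient.
    signs = np.sign(np.diag(r))
    return q * signs

def dtt(signal, order):
    """Forward transform: project the signal onto the basis.
    All coefficients are real numbers, unlike the FFT."""
    basis = discrete_orthonormal_basis(len(signal), order)
    return basis.T @ signal
```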
An algorithm is proposed for traffic sign detection and identification based on color filtering, color segmentation and neural networks. Traffic signs in Thailand are classified by color into four types: prohibitory signs (red or blue), general warning signs (yellow) and construction area warning signs (amber). A color filtering method is first used to detect traffic signs and classify them by type. Then color segmentation methods adapted to each color type are used to extract inner features, e.g., arrows, bars, etc. Finally, neural networks trained to recognize signs of each color type are used to identify any given traffic sign. Experiments show that the algorithm improves the accuracy of traffic sign detection and recognition for the traffic signs used in Thailand.
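The first color-filtering stage is typically a hue/saturation gate per sign color. The sketch below computes per-pixel hue from RGB and masks strongly red pixels; the hue limits and saturation threshold are illustrative assumptions, and the other color classes would use analogous hue ranges.

```python
import numpy as np

def rgb_to_hue(img):
    """Per-pixel hue in degrees [0, 360) from an RGB float image in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx = np.max(img, axis=-1)
    mn = np.min(img, axis=-1)
    d = mx - mn + 1e-12
    hue = np.where(mx == r, (g - b) / d % 6,
          np.where(mx == g, (b - r) / d + 2, (r - g) / d + 4))
    return hue * 60.0

def red_sign_mask(img, sat_min=0.4, val_min=0.2):
    """Binary mask of strongly red pixels (hue near 0/360 degrees),
    a first filtering step before shape analysis and classification."""
    hue = rgb_to_hue(img)
    val = np.max(img, axis=-1)
    sat = (val - np.min(img, axis=-1)) / (val + 1e-12)
    return ((hue < 20) | (hue > 340)) & (sat > sat_min) & (val > val_min)
```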
Support vector machine (SVM) is a machine learning method based on statistical learning theory. It avoids some inherent disadvantages of neural networks, such as local minima in the training process, structure selection, and slow convergence, and has strong nonlinear system identification and generalization ability even with small samples. In this paper, we establish a multi-classifier based on a binary tree model, with the category-splitting rules set by experience. However, this method can lead to sub-optimal classifiers. Our future work will focus on how to find optimal category rules for the multi-classifier; empirical risk analysis and cluster analysis will be considered further.