This PDF file contains the front matter associated with SPIE Proceedings Volume 6718, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
This paper proposes a novel and robust image-focusing scheme based on a new focus measure derived from Orientation Code Matching (hereinafter, OCM). A distinctive pencil-shaped profile was found by comparing the similarity between patterns extracted at the same position within their own scenes. Based on this profile, a new evaluation function, named the Complemental Pencil Volume (hereinafter, CPV), is defined and computed to represent the local sharpness of images, whether in or out of focus. A local-scope search algorithm then locates the maximum of the CPV, which corresponds to the just-focused image. The proposed focusing method can be applied to ill-conditioned cases with low-contrast observations. Experimental results show that OCM-based focusing is very robust to changes in brightness and to further irregularities of real imaging systems, such as dark conditions.
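The search step can be sketched as a simple hill-climb over a focus stack. The gradient-energy score below is a generic stand-in for the paper's CPV measure, which is not reproduced here:

```python
import numpy as np

def sharpness(img):
    # Gradient-energy score: a generic proxy for the paper's CPV
    # focus measure built on Orientation Code Matching.
    gy, gx = np.gradient(img.astype(float))
    return float(np.sum(gx ** 2 + gy ** 2))

def local_search_focus(stack):
    """Hill-climb over an ordered focus stack and return the index of
    the sharpest frame (assumes a single peak along the stack)."""
    scores = [sharpness(im) for im in stack]
    i = 0
    while i + 1 < len(scores) and scores[i + 1] > scores[i]:
        i += 1
    return i
```

The single-peak assumption mirrors the local-scope search idea: the focus measure rises monotonically toward the just-focused position and falls beyond it.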
The application of visual servoing to microassembly is being developed but is still limited by the small depth of field of
optical microscopes. Several research efforts in robotics and optics have addressed this problem, but they
are still insufficient to be applied directly. This paper explains the concept of flexible depth-of-field extension, which uses
an LC-SLM to alleviate the computational cost of the wave-front coding method and to obtain the best image for a given
defocus of the object, and describes how the system is implemented. The approach is applied to a microassembly setup for
target-detection experiments. The results imply that a fast and robust microassembly strategy can be devised for
the micro world.
This paper proposes a new method to estimate distortion for template matching. Matching object images from scenes to pre-obtained reference images is necessary for robot mobility, but matching error due to image distortion remains a problem. There are two measures of distortion: "scale," which represents the distortion of the distance from an object to the robot as determined by a camera, and "directional distortion," which represents the distortion of direction due to the relative postures and positions of objects with respect to the mobile robot. The latter is considered to involve rotation around the vertical (y) axis of the image plane. The paper proposes the Orthogonal Projection Distortion (OPD) method to estimate directional distortion and uses it with an image-correction method to improve similarity. We demonstrate the effectiveness of the approach experimentally using real images.
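As an illustration of directional-distortion correction (an assumption, not the paper's OPD formulation), the sketch below undoes the horizontal foreshortening produced by a rotation of the scene plane about the image's vertical axis, then compares patches by normalized cross-correlation:

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation between two equally sized patches.
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def correct_yaw(img, theta):
    """Undo the horizontal foreshortening caused by rotating the scene
    plane by theta about the image's vertical axis (simple model:
    columns compressed by cos(theta))."""
    h, w = img.shape
    xs = np.arange(w) * np.cos(theta)  # sample positions in the distorted image
    out = np.empty_like(img, dtype=float)
    for r in range(h):
        out[r] = np.interp(xs, np.arange(w), img[r])
    return out
```

With the distortion removed, the corrected patch correlates more strongly with the undistorted template than the raw observation does.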
When a specular surface is imaged on a group of CCD cells, the image usually suffers from saturation and blooming.
This problem is a serious obstacle to applying optical profiling methods based on structured light to specular
objects. In this paper, a phase-based profiling system combined with a spatial light modulator in the imaging part is
proposed to measure the three-dimensional shape of partially specular surfaces. The spatial light modulator prevents the
image sensor from being saturated. In this way, the projected fringes are well imaged and their phase information is correctly
extracted. The system configuration and the transmittance-control scheme are explained in sequence. The idea is verified by
experimental results, in which phase information is successfully extracted from areas that are normally
unmeasurable due to saturation.
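Phase extraction from well-exposed fringes is commonly done with the standard four-step algorithm; the sketch below assumes phase shifts of 0, π/2, π and 3π/2 (the paper's exact phase-shifting scheme is not specified here):

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped fringe phase from four frames with phase shifts of
    0, pi/2, pi and 3*pi/2: phi = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(i4 - i2, i1 - i3)
```

Because the formula uses only intensity differences, the unknown background level and modulation amplitude cancel out, which is why clipping-free (unsaturated) frames are essential.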
Optical methods based on structured-light projection are typically used for three-dimensional (3D) surface-geometry
measurement as the methods are non-contacting. Single-line projection techniques are most commonly used; however,
they require a translation or rotation stage to scan the range-sensor head across the object surface. Phase-shifting fringe
projection full-field methods have been commonly used with projection of two to four fringe patterns during
measurement. Simultaneous projection of multiple structured-light lines is a practical alternative full-field method as it
offers the advantage of range image capture using a single image or simultaneously captured image pair. This paper
presents a multiple-line full-field laser-camera range sensor, calibrated using a novel 2D image-plane to 3D object-space
mapping. The sensor employs a projector generating 31 laser lines and two CCD cameras. To calibrate the system, the 31 laser
lines were simultaneously projected onto a black plate with horizontal line markings of known spacing. Images were
acquired with the plate at 6 known depths over a 200 mm range of depth. After conical diffraction correction was applied
to the object-space coordinates for all points, a mapping of 2D image coordinates to the known 3D object-space
coordinates was carried out for each of the 31 laser projections using closed-form Bezier surface fitting. The means of the
absolute calibration errors were 0.010 mm and 0.010 mm in X, 0.598 mm and 0.465 mm in Y (vertical), and
0.190 mm and 0.189 mm in Z (depth), for the two cameras, respectively. The method has the advantage that no
knowledge of the geometry of the laser-camera setup is required and accurate alignment of the laser and camera is not
necessary. The calibration technique also accounts for any lens distortion from the low cost cameras.
The most important capabilities of an autonomous mobile robot are building a map of the surrounding environment and estimating its own location within it. This paper proposes a real-time localization and map-building method based on 3D reconstruction using scale-invariant features from a single camera. A mobile robot with a monocular camera facing a wall extracts scale-invariant features from each image using SIFT (Scale Invariant Feature Transform) as it follows the wall. Matching is carried out on the extracted features, and the matched features are transformed into absolute coordinates using 3D point reconstruction and a geometrical analysis of the surrounding environment to build a feature map, which is stored in a map database. After the feature map is built, the robot finds points that match the stored feature map and determines its pose from affine parameters in real time. The position error of the proposed method was at most 6.2 cm, and the angular error was within 5°.
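The feature-matching step can be sketched as nearest-neighbour descriptor matching under Lowe's ratio test, a common companion to SIFT; the descriptor arrays below are placeholders for real SIFT descriptors:

```python
import numpy as np

def ratio_match(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in
    desc_b, keeping only matches that pass Lowe's ratio test.
    Returns (index_in_a, index_in_b) pairs."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        # Accept only if the best match is clearly better than the second best.
        if len(dists) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

The ratio test discards ambiguous matches, which keeps the pose estimate from being corrupted by repeated wall texture.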
In the past, methods for understanding deformation/fracture (D/F) characteristics have been limited to indirect
observation of the surface. D/F characteristics are affected by micro-scale structural features such as air bubbles (pores),
cracks, and micro-defects; therefore, they need to be analyzed internally. In this paper, we propose a system
that automatically obtains the correspondence between pre- and post-D/F pores. Our system enables
3D, local, high-accuracy analysis of D/F characteristics. Experiments proved its effectiveness.
The growing demand for advanced micro-devices that integrate various sensors and actuators, e.g. for biomedical
applications, has created a strong need for assembly units that can meet high precision and manipulation requirements.
However, developing a sophisticated machine that can fulfill these requirements solves only part of
the problem: the need for a skilled person who can program and operate the machine must also be addressed. The
user interface should provide sufficient information to perform any assembly operation; however, it should also
hide or abstract information that would distract the operator from the main task. Controlling the information
flow from/to the user and to/from the machine is performed by representing the real environment in a virtual
one. This additional layer of abstraction between the user and the machine is based on a standard virtual reality
(VR) approach. This paper demonstrates the integration of such a VR system into an existing microassembly
Tape-substrate patterns of ultra-fine-pitch circuits, less than 10 micrometers in pattern width, must be inspected
through high-resolution optics. In the process of picking out defects at the level of the critical dimension through image
processing, however, trivial blemishes formed by dust or micro-particles may be detected simultaneously. This leads to
unnecessary work for the operators who review and verify the additional detected points. To maximize the
efficiency of the inspection process, we need to classify each defect candidate as either a real pattern
defect or a trivial blemish caused by dust. Since a real defect arising from under- or over-etching bears inherent features in
shape and brightness, it can be discriminated from trivial blemishes. In this article, we propose an image-feature-based
defect classification method, in which appropriate measures are obtained from a series of image analyses based on the
FFT. Based on the data collected from experiments, we devised a statistical model for classification.
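A minimal example of an FFT-derived measure is the fraction of spectral energy outside the lowest frequencies; this is an illustrative feature only, not the paper's actual measure set:

```python
import numpy as np

def spectral_feature(patch):
    """Fraction of spectral energy outside the central (low-frequency)
    5x5 window of the shifted magnitude spectrum. Sharp-edged defects
    and smooth blemishes tend to score differently on such a measure."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(patch - patch.mean())))
    h, w = f.shape
    cy, cx = h // 2, w // 2
    low = f[cy - 2:cy + 3, cx - 2:cx + 3].sum()
    return 1.0 - low / (f.sum() + 1e-12)
```

Features of this kind, collected over labeled candidates, are the sort of input a statistical classification model can be trained on.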
This paper presents a methodology for urban-area classification in high-resolution IKONOS satellite imagery. The strategies
include building extraction using Bayesian theory and a Laplacian criterion, labeling and size filtering, and intensity thresholding,
which are applied to the IKONOS image in tandem to make the algorithm an effective strategy that saves processing time and
improves robustness. To realize the strategy, vegetation is first extracted based on the green layer of the RGB image; then
buildings are detected by Bayesian decision theory with respect to a Laplacian probability density function, and then shadows, which
have low intensity, are detected. In the next step, a specific intensity level is calculated as a threshold to discern roads.
Finally, open areas are extracted from the remainder of the image as regions with low Laplacian intensity and large size. Meanwhile,
morphological operations are applied to remove spurious particles from the image.
Experimental results indicate that this approach is highly efficient, especially in the extraction of large roads and streets from
dense urban-area IKONOS images.
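The labeling and size-filtering step can be sketched as 4-connected component labeling followed by removal of components below a size threshold:

```python
import numpy as np

def size_filter(mask, min_size):
    """Label 4-connected components in a binary mask and remove those
    smaller than min_size pixels; returns the filtered mask."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    current = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and labels[sy, sx] == 0:
                current += 1
                stack = [(sy, sx)]
                comp = []
                labels[sy, sx] = current
                while stack:  # flood fill over the component
                    y, x = stack.pop()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            stack.append((ny, nx))
                if len(comp) < min_size:
                    for y, x in comp:
                        mask[y, x] = False
    return mask
```

In practice a library routine (e.g. connected-component labeling from an image-processing package) would replace this explicit flood fill.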
The possibility of conceiving a nanorobot propelled by flagellated magnetotactic bacteria is becoming a reality. However, the
development of such complex systems requires the implementation of various functionalities, one of which is the
tracking of such devices with sufficient speed and accuracy. In this paper, we present an automated tracking system, developed with modern computational and microscopy equipment, designed to follow a bacterium through various swimming paths. The results obtained with this system are presented in order to assess the platform's real-time performance in tracking MC-1 magnetotactic bacteria. The system is also used to record data related to the movement of the bacteria, which may prove useful in fields of research beyond nanorobotics.
Measurement of human motion is widely required for various applications, and a significant part of this task is identifying
motion in the process of human motion recognition. This research serves several application areas, such as
surveillance, entertainment, medical treatment, and traffic, as well as user interfaces that require the recognition of
different parts of the human body to identify an action or a motion. The most challenging task in human motion recognition
is achieving the ability and reliability of a motion-capture system in tracking and recognizing dynamic movements,
because the human body structure has many degrees of freedom. Many attempts at recognizing body actions have been
reported so far, in which gestural motions are first measured by sensors and the obtained data are then processed
by a computer. This paper introduces the 3D motion analysis of the human upper body using an optical motion-capture
system for the purpose of gesture recognition. In this study, an image-processing technique for tracking optical markers
attached to feature points of the human body is introduced for constructing a model of the human upper body and estimating its
three-dimensional motion.
Intelligent surveillance has become an important research issue due
to the high cost and low efficiency of human supervisors, and
machine intelligence is required to provide a solution for automated
event detection. In this paper we describe a real-time system that
has been used for detecting car park entries, using an adaptive
background learning algorithm and two indicators representing
activity and identity to overcome the difficulty of tracking
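A common form of adaptive background learning is a running-average model with a per-pixel difference threshold; the sketch below is illustrative and may differ from the system's actual update rule:

```python
import numpy as np

class RunningBackground:
    """Running-average background model: pixels that differ from the
    learned background by more than a threshold are flagged as
    foreground, and the background is updated slowly toward each frame."""

    def __init__(self, first_frame, alpha=0.05, thresh=25.0):
        self.bg = first_frame.astype(float)
        self.alpha = alpha      # learning rate
        self.thresh = thresh    # per-pixel foreground threshold

    def apply(self, frame):
        frame = frame.astype(float)
        fg = np.abs(frame - self.bg) > self.thresh
        # Adapt the background toward the current frame.
        self.bg = (1 - self.alpha) * self.bg + self.alpha * frame
        return fg
```

The slow update lets the model absorb gradual lighting changes at a car-park entrance while still flagging vehicles and pedestrians as foreground.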
We propose a digital scorebook for football games that digitizes a football game video and presents it as an animation.
The proposed system consists of player-position estimation from the game video, an event-selection interface,
and player-movement animation. Player-position estimation allows for flexible camera movements and angles,
including zoom in and out, pan, tilt, and yaw. This reliable and robust estimation of player movement
is based on image analysis by synthesis and the Generalized Hough Transform (GHT). The operator can annotate
game scenes based on player-movement data using the event-selection interface. Player movement is represented
by an animation whose characters carry number or letter figures to emphasize the data. We demonstrate the
applicability of the player-position estimation and play-annotation scheme via the character behaviors in the animation.
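A minimal translation-only Generalized Hough Transform can be sketched as follows; practical systems extend the accumulator with rotation and scale dimensions, and the edge points here are placeholders:

```python
import numpy as np

def ght_locate(template_pts, image_pts, shape):
    """Translation-only GHT: each image edge point votes for the
    reference-point positions implied by the template's point offsets;
    the accumulator peak is the detected object location."""
    acc = np.zeros(shape, dtype=int)
    for y, x in image_pts:
        for dy, dx in template_pts:  # offsets from a reference at (0, 0)
            ry, rx = y - dy, x - dx
            if 0 <= ry < shape[0] and 0 <= rx < shape[1]:
                acc[ry, rx] += 1
    return np.unravel_index(np.argmax(acc), acc.shape)
```

Voting makes the estimate robust to clutter and missing edges, which suits broadcast footage with occluding players.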
Detection of skin regions is an important step in face detection. Skin regions are usually detected by using
skin color; however, this approach has problems, such as sensitivity to lighting conditions, because such methods
define skin color in the visible optical band and therefore require visible illumination in dark places. We instead
paid attention to differences in the reflectance characteristics of materials, and we propose a method that detects skin
directly from its reflection characteristics. In this paper, we propose a method to reduce the influence of ambient light
by using a single band-pass filter. Its usefulness is confirmed by a skin-detection experiment.
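A hypothetical sketch of reflectance-based skin detection: threshold the normalized difference between two spectral-band images. The band choice and threshold are assumptions for illustration, not the paper's filter design:

```python
import numpy as np

def skin_mask(band_a, band_b, thresh=0.2):
    """Flag pixels whose normalized difference between two spectral
    bands exceeds a threshold; skin and non-skin materials can differ
    in such band ratios. Hypothetical bands and threshold."""
    a = band_a.astype(float)
    b = band_b.astype(float)
    nd = (a - b) / (a + b + 1e-12)
    return nd > thresh
```

Because the measure is a ratio of reflectances, it is less sensitive to the overall illumination level than raw color thresholds.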