This PDF file contains the front matter associated with SPIE Proceedings Volume 6719, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.* *Shibboleth/Open Athens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
This paper is concerned with camera switching during a visual servoing task using a multi-camera vision system. The concept of dynamic sensor switching is introduced, based on image-based and position-based Jacobian-transpose control architectures. Stability is discussed by extending the Lyapunov-based proof of Kelly et al. to switched-system stability, using a common Lyapunov function for ideal target and camera models and multiple Lyapunov functions when parameter perturbations are present. Furthermore, an energy-supervised switching scheme is proposed as a novel, generic extension to switched-system visual servoing that significantly reduces the control error while requiring only local measurements of the control error and system state. The contributions of this work are stable switching visual servoing strategies that permit instantaneous adjustment of control performance and dynamic device switches in response to task requirements or sensor breakdown. Further benefits are a possible reduction of the pose-error variance over the operating distance and avoidance of singularities resulting from field-of-view limitations. The switching control schemes are illustrated by simulation studies.
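The energy-supervised switching idea can be sketched as a small supervisor that monitors a Lyapunov-like energy for each sensor/controller pair and switches only when another pair offers a clear improvement. The hysteresis margin and the two-camera naming below are illustrative assumptions, not the paper's actual implementation:

```python
def supervise_switch(active, energies, hysteresis=0.1):
    """Pick the sensor/controller pair with the lowest Lyapunov-like energy.

    Switch away from the active pair only if the improvement exceeds a
    hysteresis margin, which discourages chattering between controllers.
    """
    best = min(energies, key=energies.get)
    if best != active and energies[active] - energies[best] > hysteresis:
        return best
    return active
```

With `energies = {"cam1": 1.0, "cam2": 0.4}` and `cam1` active, the supervisor switches to `cam2`; with nearly equal energies it stays put, which is one simple way to respect a dwell condition.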
In this work, a proportional-derivative (PD) image-based visual servoing scheme for planar robot manipulators with revolute joints is proposed. Damping is added at the joint level using the robot's active joints. The proposed control law may be thought of as a velocity inner loop at the joint level implementing the derivative action and a visual outer loop at the task level performing the proportional action. Since it is assumed that velocity measurements are not available, velocity estimates are obtained from active-joint position measurements using a linear filter. Another feature of the proposed approach is that calibration of the vision system is avoided, since an image-based approach is adopted. Closed-loop stability is studied using Lyapunov stability theory. Experimental results on a laboratory prototype validate the proposed approach. Moreover, it is experimentally shown that by using a vision system to measure the robot end-effector, kinematic errors can be tolerated, in contrast with control strategies based on the direct kinematics, whose performance depends on precise knowledge of the robot kinematics.
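As a rough one-degree-of-freedom sketch of this structure (the gains, filter pole, and scalar signals are illustrative assumptions, not values from the paper), the derivative action acts on a velocity estimate produced by a first-order linear filter on measured joint position, while the proportional action acts on the image-feature error:

```python
def filter_velocity(q_lp, q_meas, a=20.0, dt=0.01):
    """First-order linear filter ("dirty derivative"): low-pass the measured
    joint position q_meas and take v ~ a * (q_meas - q_lp) as the estimate."""
    q_lp_next = q_lp + dt * a * (q_meas - q_lp)  # low-pass state update
    v_est = a * (q_meas - q_lp_next)             # velocity estimate
    return q_lp_next, v_est

def pd_visual_servo(e_image, v_est, kp=2.0, kd=0.5):
    """Outer proportional action on the image error, inner damping action
    on the filtered joint-velocity estimate."""
    return kp * e_image - kd * v_est
```

Because only position measurements and image errors enter the law, no explicit velocity sensor or camera calibration is needed in this sketch, mirroring the scheme's stated features.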
Vision-based techniques used in automatic microassembly are limited by inherent problems such as a small depth of focus (DOF) and field of view (FOV). Microassembly operations, however, must initially detect micro parts over a wide FOV and large DOF, yet maintain high resolution in the final stage. This paper proposes an active zooming control method that adjusts the FOV and DOF dynamically according to the position and focus measure of the micro objects. The proposed method is based on an artificial potential field, which makes it possible to combine different kinds of constraints, such as the FOV, focus measure, and joint limits, into the system. The method ensures that the microscope maintains a wide FOV and large DOF initially and high resolution at the end. Simulation and microassembly experiments verify the feasibility of the proposed approach.
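A minimal sketch of this kind of potential-based zoom law (the potential shapes, gains, and normalized radii below are illustrative assumptions, not the paper's formulation) combines a repulsive term that grows as the object approaches the FOV boundary with a term rewarding a higher focus measure:

```python
def fov_potential(obj_radius, fov_radius, margin=0.2):
    """Repulsive potential, nonzero only when the object's extent gets
    within `margin` of the field-of-view boundary."""
    d = fov_radius - obj_radius            # clearance to the FOV edge
    if d <= 0:
        return float("inf")                # object already leaving the FOV
    if d >= margin:
        return 0.0
    return 0.5 * (1.0 / d - 1.0 / margin) ** 2

def zoom_command(obj_radius, fov_radius, focus_measure, k_fov=1.0, k_focus=0.1):
    """Positive command zooms in (higher resolution); the FOV term pushes
    the system to zoom out when the object nears the boundary."""
    return k_focus * focus_measure - k_fov * fov_potential(obj_radius, fov_radius)
```

Early in the task the FOV term dominates and keeps the view wide; once the part is centered and in focus, the focus term drives the zoom-in toward high resolution.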
Image-based visual servoing can lead to image singularities that cause control instabilities, and there exist other constraints, such as keeping the object in the camera field of view and avoiding obstacles. This problem can be solved by coupling path planning with image-based control. In our strategy, the trajectory is planned directly in image space, which avoids the 3D estimation of the object required by motion-space path-planning methods. In the presented method, an initial path is generated using the artificial potential field method without considering the constraints, and a genetic-algorithm-based method is then used to check and modify this initial path. This approach accomplishes the task satisfactorily while reducing computation. The proposed method is applied to aligning a micro peg and hole, and simulation results show that the object reaches its desired position accurately without violating the constraints.
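A bare-bones sketch of the first stage, an attractive/repulsive potential field evaluated directly in pixel coordinates (gains, influence radius, and step size are illustrative assumptions; the genetic-algorithm refinement stage is omitted):

```python
import math

def apf_step(p, goal, obstacles, k_att=1.0, k_rep=1e4, rho0=30.0, step=1.0):
    """One unit step down the potential gradient in image space.

    The attractive force pulls toward the goal pixel; each obstacle within
    influence radius rho0 adds a repulsive force.
    """
    gx = k_att * (goal[0] - p[0])
    gy = k_att * (goal[1] - p[1])
    for ox, oy in obstacles:
        d = math.hypot(p[0] - ox, p[1] - oy)
        if 0.0 < d < rho0:
            f = k_rep * (1.0 / d - 1.0 / rho0) / d ** 2
            gx += f * (p[0] - ox)
            gy += f * (p[1] - oy)
    n = math.hypot(gx, gy) or 1.0       # normalize to a fixed pixel step
    return (p[0] + step * gx / n, p[1] + step * gy / n)
```

Iterating `apf_step` from the initial feature position yields a candidate image-space path, which a second stage could then check against FOV and obstacle constraints as the abstract describes.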
In this paper, a novel visual servoing technique for a 5-DOF mobile manipulator with an eye-to-hand camera configuration is introduced. The proposed technique can be categorized as image-based (or 2D) visual servoing using a fixed camera in conjunction with a conic mirror (i.e., a catadioptric camera system) providing panoramic vision. Two fictitious landmarks mounted on the robot's end-effector, along with their mirror reflections, provide enough information for 3D reasoning from the four points viewed on the image plane. Instead of directly using the image features associated with these four points, five new image features are chosen to make the image Jacobian rank-efficient. A dual estimation/control strategy based on an extended Kalman filter (EKF) is used to (1) estimate the camera's intrinsic and extrinsic parameters and (2) track the coordinates of the landmarks and their reflections on the image plane. The relationship between the translational and rotational velocity of a frame attached to the robot's end-effector and the rate of change of the proposed image features is fully formulated. The robustness of the proposed technique in translational and rotational servoing of a 5-DOF holonomic mobile manipulator is illustrated through computer simulations.
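The dual estimation/control loop rests on the standard EKF recursion. The following is a generic scalar sketch of one predict/update cycle, not the paper's specific state or measurement model (there the state would stack camera parameters and landmark image coordinates):

```python
def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One scalar EKF cycle: f/h are the process and measurement models,
    F/H their derivatives evaluated at the current estimate."""
    # Predict
    x_pred = f(x)
    P_pred = F(x) * P * F(x) + Q
    # Update with measurement z
    y = z - h(x_pred)                       # innovation
    S = H(x_pred) * P_pred * H(x_pred) + R  # innovation covariance
    K = P_pred * H(x_pred) / S              # Kalman gain
    return x_pred + K * y, (1.0 - K * H(x_pred)) * P_pred
```

In the vector case each product becomes a matrix product and the division a matrix inverse, but the predict/update structure is identical.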
In the shipyard, the required error margin is less than ±2 mm for producing 20000 mm by 20000 mm panels. This paper proposes a measurement system and an error-correction method using several cameras and consecutive image data for large-scale panels to satisfy the requested precision bounds. The goal is the correction of measurement-data error using the matching of four consecutive camera images acquired with four CCD camera modules. Each module consists of a CCD camera, a rotary stage, and a rotary-stage controller. The error-correction method is based on the midpoint of the direction vectors from each camera and on the relation between the origin camera and the others. This relation is estimated from corresponding points between the image planes, and the direction vector from each CCD camera is measured from the change in the angle of its rotary stage.
In particular, to measure the dimensions of the shape efficiently, a structured target must be at the center of the image plane; visual servoing moves the target to the image center. This means the motion of the measurement system, i.e., the change of rotary-stage angle and orientation, is controlled by an image-based feedback system.
An advantage of this method is improved measurement accuracy. We therefore propose an error-correction algorithm that uses four consecutive images to correct the measurement-data error. To evaluate the proposed algorithm, experiments were performed in a real shipbuilding environment.
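The "midpoint of direction vectors" step corresponds to the classic closest-approach midpoint of two 3-D viewing rays. The vector algebra below is the standard construction; the frame conventions are assumptions for illustration:

```python
def ray_midpoint(p1, d1, p2, d2):
    """Midpoint of the shortest segment between rays p1 + t1*d1 and p2 + t2*d2."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    sub = lambda a, b: [x - y for x, y in zip(a, b)]
    w0 = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b                      # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = [p + t1 * v for p, v in zip(p1, d1)]  # closest point on ray 1
    q2 = [p + t2 * v for p, v in zip(p2, d2)]  # closest point on ray 2
    return [(x + y) / 2.0 for x, y in zip(q1, q2)]
```

With measurement noise the two rays generally miss each other; the midpoint is the usual point estimate, and the residual distance between `q1` and `q2` can serve as an error measure for the correction step.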
Laser interferometry is widely used as a measuring system in many fields because of its high resolution and its ability to measure a broad area in real time all at once. Conventional laser interferometry, for example out-of-plane ESPI (electronic speckle pattern interferometry), in-plane ESPI, shearography, and holography, uses a PZT or other components as phase-shifting instrumentation to extract 3D deformation data, vibration modes, and so on. In most cases, however, the PZT has disadvantages that include nonlinear errors and a limited operating lifetime. In the present study, a new type of laser interferometry using a laser diode is proposed. With laser-diode sinusoidal phase-modulating (LD-SPM) interferometry, the phase modulation can be controlled directly through the laser-diode injection current, thereby eliminating the need for the PZT and its components and making the interferometer more compact. This paper reports a new approach to LD modulating interferometry that involves the four-bucket phase-shift method. The study proposes a four-bucket phase-mapping algorithm developed to guarantee applicability, stabilize the system in the field, and provide a user-friendly GUI. The paper presents the theory of LD wavelength modulation and sinusoidal phase modulation, and then introduces the four-bucket phase-mapping algorithm.
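For reference, the standard four-bucket (four-step) phase recovery: with intensity samples I_k = A + B·cos(φ + k·π/2) for k = 0…3, the phase follows from an arctangent of bucket differences. This is the textbook form; sign conventions vary between implementations:

```python
import math

def four_bucket_phase(i0, i1, i2, i3):
    """Recover phi from I_k = A + B*cos(phi + k*pi/2), k = 0..3:
    I3 - I1 = 2B*sin(phi) and I0 - I2 = 2B*cos(phi)."""
    return math.atan2(i3 - i1, i0 - i2)
```

The background A and modulation amplitude B cancel in the differences, which is what makes the four-step method robust to uniform intensity offsets.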
When a motion-control system tracks a fast-moving target, overshoot is the main component of the dynamic tracking error. Speed-delay compensation can be used to decrease this error, but stability is sacrificed. We put forward differential position feedback (DPF) control and discuss its effects and control mechanism through simulation. Through transfer-function identification, we find that DPF control is based on the internal-model principle. The simulation results show that DPF improves tracking of a fast-moving target but lowers tracking precision in the low-frequency region. If it is combined with dynamic integral control, better tracking precision can be obtained.
A micromechanical actuator has been developed using materials with different coefficients of thermal expansion
combined in a sandwich cantilever structure. The actuator is thermally operated and could be used in different
applications including temperature controlled electrical switches and sensors. A novel application for the microactuator
is as the prime mover for a micro multipede. The latter is essentially a multicantilever array where each element can be
actuated in sequence to produce planar movements. The actuation is provided by micro heaters integrated into each
cantilever. This paper presents the design of an optical system for monitoring the movements of a micro multipede using
two methods. The first method, triangulation, was adopted to measure the downward movement and flattening of the
cantilevers under load. The second method involved direct deflection measurement with an optical microscope. The
paper presents both simulation and test results for temperature and mechanical loading and the relationship between
electrical power and cantilever deflection.
All industrial tasks involving deep-submicron and nanotechnologies, such as micro- and nano-electromechanical systems, need high-performance, high-resolution measurements. For this purpose, this paper describes an optical displacement sensor with sub-nanometer resolution operating over a long range. High-resolution optical displacement sensors typically have a small operating range. To overcome this problem, the proposed sensor uses a triangular grating to extend the measurement range. It is shown that the resolution of the sensor depends on the grating angle. We also observed that the best precision in the proposed structure corresponds to the maximum grating angle, which is difficult to implement; in fact, there is a trade-off between the achievable resolution of the proposed sensor and the complexity of its realization. The proposed sensor has a resolution of about 24 pm over the length of the grating.
The VST (VLT Survey Telescope) is a 2.6 m optical ground-based telescope to be installed at the Cerro Paranal (Chile) observing station of the European Southern Observatory (ESO). It is a joint project of INAF-Osservatorio Astronomico di Capodimonte, responsible for the telescope design and realization, and ESO, responsible for the civil infrastructure and the daily operation of the instrument. The control system of the telescope is by definition an opto-mechatronic system: it combines the mechatronic and optical disciplines with the final aim of producing sharp images of stellar objects. The feedback control systems are partially based on conventional mechatronic sensors such as position transducers, but optical feedback from two separate technical CCD sensors is also used to implement outer control loops that compensate for optical aberrations introduced, e.g., by gravity, by shape imperfections or flexures in the mirrors, by thermal effects, or by imperfect alignment of the telescope axes.
In this paper, the design of a 2-D array of ultrasonic pressure detectors for imaging purposes, based on optical micro-electromechanical systems (MEMS), is considered. The proposed detector includes a semiconductor plane and an array of laser diodes and photodetectors around it in a specific arrangement. The semiconductor plane is deflected by the applied acoustic pressure, and a high-resolution optical displacement sensor is used to detect the deflection. The 2-D displacement-detection array is based on vertical-cavity surface-emitting laser diodes operating in the infrared region and an array of surrounding photodetectors. For displacement sensing operating in the linear range, there is a simple relationship between displacement and acoustic pressure, so high-precision pressure detection is made possible by high-resolution displacement detection.
Invited Session 2: Intelligent Vision in Robotics and Its Applications
Nowadays many parts of the shipbuilding process are automated, but the painting process is not, because of the difficulty of automated on-line painting-quality measurement, the harsh painting environment, and the difficulty of robot navigation. However, painting automation is necessary, because it can provide consistent painting film thickness. Furthermore, autonomous mobile robots are strongly required for flexible painting work. The main problem in autonomous mobile-robot navigation is that there are many obstacles that are not represented in the CAD data. To overcome this problem, obstacle detection and recognition are necessary in order to avoid obstacles and carry out the painting work effectively. Many object-recognition algorithms have been studied; in particular, 2D object-recognition methods using intensity images have been widely studied. In our case, however, no environmental illumination exists, so these methods cannot be used. 3D range data must be used instead, but the problems with 3D range data are high computational cost and long recognition times due to the huge database. In this paper, we propose a 3D object-recognition algorithm based on PCA (principal component analysis) and a neural network (NN). The novelty of the algorithm is that the measured 3D range data are transformed into intensity information, and PCA and the NN are then applied to the transformed intensity information to reduce processing time and make the data easy to handle, which are disadvantages of previous 3D object-recognition research. A set of experimental results verifies the effectiveness of the proposed algorithm.
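A toy sketch of the two preprocessing steps described above (the linear scaling, vector sizes, and precomputed principal axes are illustrative assumptions; the neural-network classifier itself is omitted):

```python
def range_to_intensity(ranges, r_min, r_max):
    """Map each range sample linearly into an 8-bit intensity value."""
    span = max(r_max - r_min, 1e-9)
    return [int(255 * (r - r_min) / span) for r in ranges]

def pca_project(x, mean, components):
    """Project a flattened intensity vector onto precomputed principal axes,
    yielding the low-dimensional feature vector fed to the classifier."""
    centered = [xi - mi for xi, mi in zip(x, mean)]
    return [sum(c * v for c, v in zip(comp, centered)) for comp in components]
```

In a real pipeline the mean vector and principal axes would be learned from a training set of range-derived intensity images, and the projected features would be passed to the trained network.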
In a conventional power-assist system, the operating direction and power of the manipulator are adjusted while the operator looks at the state and position of the operation object. In the proposed system, the position of the object is acquired by a visual sensor, which not only follows the operator's motion but also sets up a work path between the operation object and the manipulator. The end-effector is then guided to the object so that the operator can operate the manipulator smoothly.
In this paper, work-path creation using a potential field suited to the operator is proposed. A nearby work path is set up according to the end-effector trajectory drawn by the operator, and guidance toward the object aims to improve the maneuverability of the power assist.
This paper treats the navigation problem of a mobile robot based on vision information and ultrasonic data. In our method, by calculating the optical flow in the images, the mobile robot can detect obstacles ahead of it and, by avoiding the obstacle regions, generate an optimal trajectory to the final goal. To generate this trajectory, the distance between the mobile robot and an obstacle is needed; it is obtained by evaluating a function of the ultrasonic information. Experiments show the effectiveness of the proposed method.
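As a simplified sketch of the obstacle-detection cue (the region partition, flow vectors, and threshold are illustrative assumptions; a real flow field would come from a dense optical-flow computation over consecutive frames):

```python
import math

def obstacle_regions(flow_by_region, threshold=2.0):
    """Flag image regions whose mean optical-flow magnitude exceeds a
    threshold -- nearby obstacles produce large flow as the robot moves."""
    flagged = []
    for region, vectors in flow_by_region.items():
        mags = [math.hypot(u, v) for u, v in vectors]
        if sum(mags) / len(mags) > threshold:
            flagged.append(region)
    return flagged
```

The flagged regions would then be fused with the ultrasonic distance estimate before the trajectory to the goal is replanned.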
We propose a novel three-dimensional measurement approach for flexible cables for factory-automation applications, such as cable handling and connector insertion without conflicts between the cables and the robotic arms. The approach is based on motion stereo with a vision sensor. Laser slit beams are projected onto the cables to make landmarks that solve the stereo correspondence problem efficiently. These landmark points, and interpolated points with rich texture, are tracked in an image sequence and reconstructed as the cable shape. For stable feature-point tracking, a robust texture-matching method, orientation code matching, and a tracking-stability analysis are applied. In our experiments, arch-like cables were reconstructed with an uncertainty of 1.5% by this method.
Clone nursery-plant production is one of the important applications of biotechnology. Most bio-production processes are highly automated, but the transplanting of small nursery plants cannot be automated because the shapes of the small plants are not uniform. In this research, a transplanting robot system for clone nursery-plant production is under development. A 3-D vision system using a relative stereo method detects the shapes and positions of small nursery plants through transparent vessels, and a force-controlled robot picks up the plants and transplants them into vessels with artificial soil.
Insufficient vision information due to occlusion and low resolvability is one of the important issues that limit the application of conventional optical vision systems in micromanipulation and microassembly. A variable-view imaging system can overcome these issues by changing optical-system parameters such as spatial position, orientation, and focus plane. Its ability to achieve a desired view of the target makes it particularly suitable for observing three-dimensional micro objects in micromanipulation and microassembly. To determine the tilt angle, pan angle, and view position, the kinematics of the variable-view system was analyzed by ray tracing with the help of vector refraction theory. The paper also shows the system's applicability to microassembly by demonstrating a micro peg-in-hole insertion task.
This paper presents a nanohandling robot cell with flexible visual feedback designed to work inside an SEM's vacuum
chamber in order to support teleoperated and fully automated nanohandling. Rail-based robots position miniature video
microscopes that observe the handling from different angles and at different magnifications. Image-processing techniques can be used to recognize and track objects, and three-dimensional information can be obtained by stereo vision and from the microscope's focus. The feasibility and advantages of the CameraMan concept are analyzed via the
implementation of a robot cell prototype. A self-learning controller is used to control the non-linear parts of the system,
challenges for cooperatively controlling the multi-robot system are outlined and high-level automation is discussed.
The atomic force microscope (AFM) has proven to be a valuable instrument for the characterization and manipulation of biological objects. When using the AFM as a nanomanipulation tool, two principal problems arise. First, manipulation with the AFM has to be performed blind; this can be partially solved by virtual-imaging and force-feedback techniques. A second, more challenging problem is caused by tip contamination and the selection of the AFM tip: if the same probe is used for manipulation and imaging, tip contamination can degrade image quality, and the requirements on tip shape and material may differ between manipulation and imaging. Addressing both problems, an automated microrobot station is proposed that uses nanomanipulation robots equipped with self-sensing AFM tips (piezoresistive cantilevers) working in cooperation with a conventional AFM. The system will not only benefit from the decoupling of imaging and manipulation; it will also allow simultaneous measurements (electrical, mechanical, and thermal conduction) at different points of the sample. Because of spatial uncertainties arising from thermal drift, hysteresis, and creep-afflicted actuators, developing a control system for the cooperation of the microrobot and the AFM is challenging. Current research efforts towards a nanohandling robot station combining an AFM-cantilever-equipped microrobot and an AFM are presented.