In this paper, we propose automated target recognition using the scale-invariant feature transform (SIFT) in a PowerPC-based infrared (IR) imaging system. An IR image yields more feature values at night than in the daytime, whereas a visual image yields more feature values in the daytime; IR-based object recognition therefore suits digital surveillance systems, which must operate at night. In the daytime, however, an IR image yields only a few feature values, which are less effective than those of a daytime visual image. The proposed method consists of two stages. First, the interest points of moving objects are localized in position and scale. Second, descriptors are built for the interest points and the moving objects are recognized. The proposed method uses SIFT for effective feature extraction in the PowerPC-based IR imaging system, and it comprises scale-space extrema detection, orientation assignment, keypoint description, and feature matching. Because fewer feature values are available in an IR image than in a visual image, the proposed SIFT descriptor uses a support region about 1.5 times larger than that used for a visual image: field tests show that objects appear in a more spread-out form in IR imagery, so a wider descriptor window gives more precise object matching. Experimental results show that the proposed method extracts object feature values effectively in the PowerPC-based IR imaging system.
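The scale-space extrema stage above can be illustrated with a minimal difference-of-Gaussians (DoG) search. This is our own simplified sketch, not the paper's PowerPC implementation: a full SIFT pipeline also compares extrema across adjacent scales, refines keypoints to sub-pixel accuracy, and builds orientation histograms, and all function names and parameter values here are assumptions.

```python
import numpy as np

def gaussian_kernel(sigma):
    # 1-D Gaussian; separable convolution keeps the sketch dependency-free
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    # Horizontal then vertical 1-D convolution = 2-D Gaussian blur
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def dog_extrema(img, sigmas=(1.0, 1.6, 2.56), thresh=0.01):
    # Difference-of-Gaussians stack; a candidate keypoint is a pixel whose
    # DoG response is an extremum of its 3x3 spatial neighbourhood
    # (full SIFT additionally checks the adjacent scale layers)
    blurred = [blur(img, s) for s in sigmas]
    dogs = [b2 - b1 for b1, b2 in zip(blurred, blurred[1:])]
    keypoints = []
    for s, d in enumerate(dogs):
        for y in range(1, img.shape[0] - 1):
            for x in range(1, img.shape[1] - 1):
                patch = d[y - 1:y + 2, x - 1:x + 2]
                v = d[y, x]
                if abs(v) > thresh and (v == patch.max() or v == patch.min()):
                    keypoints.append((y, x, s))
    return keypoints
```

On a synthetic Gaussian blob, the detector fires at the blob center, which is the expected behavior of a DoG extremum search.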
In this paper, we propose image fusion for open, unknown environments using normalized mutual information (NMI) in a combined infrared (IR) and visual vision system. Image fusion, a branch of image processing, creates a new image that combines information from several different sensors and thereby provides effective information about a specific object: its type, its sensed characteristics, and information that cannot be obtained from a single sensor. Multi-sensor image fusion has two advantages. First, a multi-sensor image has inherent redundancy, because images from various multi-band sensors are fused. Second, unlike a single sensor, a multi-sensor system retains the information of each sensor and makes it easier to separate object information in real environments. The proposed method consists of feature-point extraction and comparison, image registration, and pseudo-color display. Feature-point extraction finds similar feature points across the sensors using a corner detector, and the detected correspondence points are then compared using NMI. The images acquired from the two sensors must also be registered, because each image is expressed in an independent coordinate system and the reference image must be transformed into the coordinate system of the sensed image. The registration uses a homography (H) matrix transformation, and the two images are overlaid using HSV-based blending. Experimental results show that the proposed method achieves high precision for the fused pseudo-color image and performs image registration with the probability-based measure.
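The NMI comparison of corresponding patches can be sketched as follows. This is a generic NMI implementation under our own assumptions (32 histogram bins, log base 2), not the authors' code; NMI(A,B) = (H(A)+H(B))/H(A,B) rises from about 1 for unrelated patches to 2 for identical ones.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    # Joint histogram of the two patches -> joint and marginal distributions
    hist2d, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist2d / hist2d.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    # NMI(A, B) = (H(A) + H(B)) / H(A, B)
    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```

A patch compared with itself scores 2, while an unrelated patch scores lower, which is what makes NMI usable as a correspondence criterion.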
In this paper, a new target condition is proposed to increase the robustness of the facet-based detection method against zero-mean Gaussian noise. In the proposed algorithm, the pixels detected by the maximum-extremum condition are checked further to discern whether they are false maximum points. The experimental results show that the proposed algorithm is much more robust to zero-mean Gaussian noise than the conventional detection method.
In this paper, we propose suppression of fixed-pattern noise (FPN) and compensation of soft defects to improve object tracking in a cooled staring infrared focal plane array (IRFPA) imaging system. FPN appears in the observed image when non-uniformity compensation (NUC) does not match the current temperature, and soft defects appear as flickering black and white points caused by the time-varying non-uniformity of the IR detector. These problems are serious because they degrade image quality and disturb object tracking. The signal-processing architecture of the cooled staring IRFPA imaging system holds three reference gain/offset tables, for low, normal, and high temperature. The proposed method operates two offset tables for each gain table, covering six temperature segments in total. The proposed soft-defect compensation consists of three stages: (1) divide the image into sub-images, (2) determine the motion distribution of objects between sub-images, and (3) analyze the statistical characteristics of each stationary pixel. Experimental results show that the proposed method produces an improved image, suppressing FPN under changing temperature distributions in the observed image in real time.
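The gain/offset tables build on the standard two-point non-uniformity correction. The sketch below shows that basic correction only; it is our own simplified illustration, not the six-segment table scheme itself, and the variable names are assumptions.

```python
import numpy as np

def two_point_nuc(raw, low_frame, high_frame, t_low, t_high):
    # Per-pixel gain and offset from two uniform-temperature reference
    # frames (blackbody at t_low and t_high), then the linear correction
    # corrected = gain * raw + offset maps every pixel onto the same scale
    gain = (t_high - t_low) / (high_frame - low_frame)
    offset = t_low - gain * low_frame
    return gain * raw + offset
```

Simulating a detector with random per-pixel gain and offset and a uniform 30-degree scene, the corrected frame is uniform again, i.e. the fixed pattern is removed.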
Recently, multi-sensor image fusion systems and related applications have been widely investigated. In an image fusion
system, robust and accurate multi-modal image registration is essential. In the conventional method, the image registration
process starts with manually selected corresponding pairs in the two sensor images. Using these corresponding pairs, a
transform matrix is initialized and refined through an optimization process. In this paper, we propose a new automatic
extraction method for such corresponding pairs. The Harris corner detector is employed to extract feature points in both
EO/IR images individually. Patches around the detected feature points are matched with a probabilistic criterion, mutual information
(MI), which is a preferred measure for image registration due to its robust and accurate performance. Simulation
results show that the proposed scheme has a low time complexity and extracts corresponding pairs well.
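The Harris response used to pick the feature points can be sketched as below. This is a minimal version under our own assumptions: box smoothing stands in for the Gaussian window of real implementations, k = 0.04 is the usual empirical constant, and np.roll wraps at the borders, so only interior responses are meaningful.

```python
import numpy as np

def harris_response(img, k=0.04):
    # Structure tensor from central-difference gradients;
    # corner response R = det(M) - k * trace(M)^2
    iy, ix = np.gradient(img.astype(float))
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box(a, r=1):
        # Box smoothing of the tensor entries (a Gaussian in practice)
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out / (2 * r + 1) ** 2

    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace ** 2
```

On a bright square, the response is positive at a corner and negative along an edge, which is exactly the separation the detector exploits.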
In this paper, intermediate view reconstruction (IVR) using an adaptive disparity search algorithm (ADSA) is proposed for real-time 3-dimensional (3D) processing. The proposed algorithm reduces the processing time of disparity estimation by selecting an adaptive disparity search range, and it also increases the quality of the 3D imaging. That is, by adaptively predicting the mutual correlation between the images of a stereo pair, the bandwidth of the stereo input pair can be compressed to the level of a conventional 2D image, and the predicted image can be effectively reconstructed from a reference image and the disparity vectors. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm improves the PSNR of the reconstructed image by about 4.8 dB and reduces its synthesis time by about 7.02 s compared with conventional algorithms.
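The core idea, restricting the disparity search to a window around a previous estimate, can be sketched with a toy SAD block matcher. This is our own illustration under assumed block sizes and ranges, not the paper's ADSA.

```python
import numpy as np

def block_disparity(left, right, block=8, max_disp=8, prev=None, margin=2):
    # SAD block matching along the epipolar line; when `prev` (an earlier
    # disparity map) is given, the search shrinks to [prev-margin, prev+margin],
    # which is the adaptive-range idea
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y + block, x:x + block]
            lo, hi = 0, max_disp
            if prev is not None:
                lo = max(0, prev[by, bx] - margin)
                hi = min(max_disp, prev[by, bx] + margin)
            best, best_err = 0, np.inf
            for d in range(lo, hi + 1):
                if x - d < 0:
                    continue  # candidate block would fall outside the image
                err = np.abs(ref - right[y:y + block, x - d:x - d + block]).sum()
                if err < best_err:
                    best, best_err = d, err
            disp[by, bx] = best
    return disp
```

A second pass seeded with the first pass's map searches only 2*margin+1 candidates per block instead of max_disp+1, which is where the processing-time saving comes from.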
Tracking deformable objects is important in many applications, such as surveillance, security, and military systems. In this paper, we implement a block-matching-based tracking scheme on a PowerPC platform, using information from an infrared (IR) sensor. When an occlusion occurs, the proposed algorithm predicts the movement of the object from its tracking history, so tracking can be maintained. Experimental results show that the proposed system reduces calculation time and tracks objects under camera jitter and occlusion.
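The two ingredients, block matching and coasting through an occlusion, can be sketched as follows. This is a simplified stand-in with assumed names and window sizes, not the PowerPC implementation.

```python
import numpy as np

def sad_match(frame, template, center, search=8):
    # Exhaustive SAD search in a (2*search+1)^2 window around `center`;
    # returns the best top-left corner and its matching error (the caller
    # can treat a large error as an occlusion)
    th, tw = template.shape
    cy, cx = center
    best, best_err = center, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            if y < 0 or x < 0 or y + th > frame.shape[0] or x + tw > frame.shape[1]:
                continue
            err = np.abs(frame[y:y + th, x:x + tw] - template).sum()
            if err < best_err:
                best, best_err = (y, x), err
    return best, best_err

def predict(history):
    # Constant-velocity extrapolation from the last two confirmed positions,
    # used to keep tracking while the object is occluded
    (y1, x1), (y2, x2) = history[-2], history[-1]
    return (2 * y2 - y1, 2 * x2 - x1)
```

During occlusion the tracker reports `predict(history)` instead of the match result, and resumes matching once the error drops again.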
KEYWORDS: 3D image processing, 3D acquisition, Image resolution, 3D image reconstruction, Image processing, Target detection, Resolution enhancement technologies, 3D modeling, Target recognition, Imaging systems
A computer-based integrated imaging system (CIIS) using normalized cross correlation (NCC) for resolution enhancement is proposed to extract accurate location data of 3-D objects. Elemental images (EIs) of the target and reference objects are picked up by lenslet arrays, and target and reference plane images with enhanced resolution are then reconstructed at the output plane using the CIIS technique. Through cross correlations between the reconstructed reference and target plane images, the 3-D location data of the target objects in a scene can be robustly extracted. Our experiments show that the proposed correlation scheme provides good discrimination and detection performance for 3-D object recognition.
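At the heart of the correlation step is zero-mean normalized cross correlation between two patches. The minimal patch-level sketch below is our own, not the CIIS reconstruction code.

```python
import numpy as np

def ncc(a, b):
    # Zero-mean normalized cross correlation of two equal-size patches;
    # +1 for a perfect (affine-brightness) match, -1 for an inverted one
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0
```

Because the mean is removed and the result is normalized, NCC is invariant to gain and offset changes between the reconstructed plane images, which is why it discriminates targets robustly.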
Heterogeneous-camera-based surveillance systems provide more robust tracking of objects. To take advantage of additional cameras, the geometrical relationship between the cameras, and between an object and a camera, must be established. This paper presents an algorithm that can track a non-rigid object in real time in a night-watch system with insufficient light. The proposed method adopts a hierarchical active shape model (ASM) for real-time tracking, with adaptive landmark-point assignment to reduce the computational load at each level. The active shape model is robust for tracking non-rigid objects and overcomes occlusion, because it deforms an average shape of the object using trained contour information. The proposed algorithm uses CCD sensor information to track objects in the daytime and IR sensor information at night. When complete occlusion occurs, the algorithm predicts the movement of the object from its tracking history and keeps tracking it. The experimental results show that an object can be tracked both day and night using its trained contour information, and confirm robust tracking under partial occlusion. In future work, we will develop a real-time region-alignment algorithm for a heterogeneous-camera-based surveillance system in complex environments.
Target segmentation plays an important role in the entire target
tracking process. This process decides whether the current pixel
belongs to the target region or not. In previous works, the target region was extracted according to whether the intensity of each pixel exceeded a certain value. But simple binarization using one feature, i.e., intensity, can easily fail as conditions change. In this paper, we employ additional features, such as intensity, deviation over a time window, and matching error, rather than intensity alone. Each feature is weighted by a weighting logic that compares its characteristics in the target region with those in the background region, assigning a higher weight to a feature that differs strongly between the two regions. The proposed segmentation method can therefore adapt the priority of the features and is robust to changing conditions in various environments.
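The weighting logic can be sketched as follows: each feature gets a weight proportional to how strongly it separates target statistics from background statistics. This is a simplified stand-in with assumed names, not the paper's exact logic.

```python
import numpy as np

def feature_weights(target_feats, background_feats):
    # Rows are samples, columns are features (intensity, deviation, ...).
    # A feature whose mean differs strongly between target and background
    # gets a proportionally higher weight.
    diff = np.abs(target_feats.mean(axis=0) - background_feats.mean(axis=0))
    return diff / diff.sum()  # assumes at least one feature separates

def segment(pixel_feats, target_mean, weights, thresh):
    # Weighted distance to the target feature mean decides membership
    dist = np.abs(pixel_feats - target_mean) @ weights
    return dist < thresh
```

With a feature that separates (intensity) and one that does not (deviation), the weight concentrates on the discriminative feature, so the segmentation automatically prioritizes it as conditions change.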
In this paper, we propose a personal verification method that uses 3D face information, infrared (IR) imagery, and speech to improve on the accuracy of single-biometric authentication. The false acceptance rate (FAR) and false rejection rate (FRR) have been a fundamental bottleneck of real-time personal verification. The proposed method uses principal component analysis (PCA) for face recognition and a hidden Markov model (HMM) for speech recognition, based on a stereo acquisition system with IR imagery. The 3D face information, i.e., facial depth and distance, is acquired with the stereo system. The system consists of eye detection, facial-pose estimation, and PCA modules. An IR image of a human face presents a unique heat signature; here, the IR images are used only to decide whether a face is present. Fuzzy logic makes the final verification decision. Experimental results show that the proposed system reduces the FAR, which demonstrates that it overcomes the limitation of a single-biometric system and provides stable person authentication in real time.
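The decision stage can be sketched as score-level fusion gated by the IR face check. The paper uses fuzzy logic for the final decision, so the weighted sum below is only a simplified stand-in, and the weights and acceptance threshold are assumptions.

```python
def fuse_scores(face_score, speech_score, ir_is_face,
                w_face=0.6, w_speech=0.4, accept=0.7):
    # IR liveness gate first: if the IR image does not show a face,
    # reject regardless of the other modalities
    if not ir_is_face:
        return False
    # Weighted combination of the PCA face score and HMM speech score
    # (scores assumed normalized to [0, 1])
    return w_face * face_score + w_speech * speech_score >= accept
```

Requiring both the IR gate and a high combined score is what lets a multi-biometric system lower the FAR relative to any single modality.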
In this paper, we propose a method that gives a mobile robot more autonomy by providing vision sensors. The proposed autonomous mobile robot consists of vision, decision, and moving systems. The vision system is based on stereo technology, which requires correspondences between identical points in the left and right images. Although the mean square difference (MSD) is generally used as the correspondence measure, it is prone to errors caused by shadows, color changes, and repetitive texture, among others. To correct these errors, the four-direction method, which incorporates surrounding information into the correspondence measure, is used to improve matching accuracy. Object edges are first extracted from the Laplacian-of-Gaussian (LoG) filtered image, and post-processing removes the remaining high-frequency noise; an adaptive threshold is applied to minimize changes to the edges. The extracted edge image is then segmented based on its histogram, and candidate blocks are scanned precisely for accurate object extraction. Even though the mobile robot can move autonomously, it must avoid meaningless movement; to this end, a target is designated, and the robot perceives the target using structural information so that it can move toward it. The decision system uses three-dimensional (3D) distance information extracted from the stereo vision and enables dynamic movement in search of the target. Experimental results show an average error of 1.25% in distance estimation, a 97% recognition rate for target objects, and a 2.3% collision rate with obstacles.
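The correspondence and distance steps can be sketched as a plain single-window MSD search plus the pinhole depth relation Z = fB/d. This is our own illustration: the four-direction aggregation, the LoG edge stage, and the robot's actual camera parameters are omitted, and all names are assumptions.

```python
import numpy as np

def msd(a, b):
    # Mean squared difference between two equal-size patches
    return float(((a.astype(float) - b) ** 2).mean())

def match_disparity(left, right, y, x, max_disp=16, win=3):
    # Correspondence along the epipolar line: the disparity d whose
    # right-image window minimizes the MSD against the left-image window
    ref = left[y - win:y + win + 1, x - win:x + win + 1]
    best_d, best_c = 0, np.inf
    for d in range(max_disp + 1):
        if x - d - win < 0:
            break  # window would leave the right image
        c = msd(ref, right[y - win:y + win + 1, x - d - win:x - d + win + 1])
        if c < best_c:
            best_d, best_c = d, c
    return best_d

def depth(disparity_px, focal_px, baseline):
    # Pinhole stereo geometry: Z = f * B / d
    return focal_px * baseline / disparity_px
```

The decision system can then convert each matched disparity into a metric distance to the target with `depth`.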