A method is proposed for detecting object motion in images affected by camera shake or camera egomotion. The approach is based on edge orientation codes and on the entropy calculated from a histogram of those codes; here, entropy is extended to spatio-temporal entropy. We consider that the spatio-temporal entropy calculated from time-series orientation codes can represent motion complexity, e.g., the motion of a pedestrian. Our
method can reject false positives caused by camera shake or background motion. Before the motion filtering, object
candidates are detected by a frame-subtraction-based method. After the filtering, over-detected candidates are evaluated
using the spatio-temporal entropy, and false positives are then rejected by thresholding. The method rejected 79% to 96% of all false positives in road-roller and escalator scenes. The motion filtering somewhat decreased the detection rate because of motion coherency or the small apparent motion of a target; in such cases, a tracking method such as a particle filter or a mean-shift tracker needs to be introduced. Our method runs in 32 to 46 ms per frame on a 160×120-pixel image on an Intel Pentium 4 CPU at 2.8 GHz, which we consider fast enough for real-time detection. In addition, our method can serve as pre-processing for classifiers based on support vector machines or boosting.
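As a concrete illustration, the sketch below (a minimal Python version, not the paper's implementation) quantizes gradient directions into orientation codes and computes the entropy of their histogram over a spatio-temporal block. The 16-code quantization, the gradient threshold, and the invalid-code convention are assumptions.

```python
import numpy as np

def orientation_codes(gray, n_codes=16, grad_thresh=10.0):
    """Quantize gradient directions into n_codes bins; low-gradient
    pixels get an 'invalid' code (n_codes). Assumed parameters."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                      # in [-pi, pi]
    codes = ((ang + np.pi) / (2 * np.pi) * n_codes).astype(int) % n_codes
    codes[mag < grad_thresh] = n_codes            # mark as invalid
    return codes

def spatiotemporal_entropy(code_stack, n_codes=16):
    """Entropy of the histogram of orientation codes accumulated over a
    spatio-temporal block (T x H x W of codes)."""
    valid = code_stack[code_stack < n_codes]
    if valid.size == 0:
        return 0.0
    hist = np.bincount(valid.ravel(), minlength=n_codes).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

A static block (one dominant code) gives entropy near zero, while complex motion spreads codes over many bins and yields high entropy, which is the cue used for rejecting false positives.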
This study aims to establish an error model of a stereo measurement system that accounts for camera vibration.
First, we examined the distribution of the disparity error both with and without camera vibration. We found that the distribution can be approximated by a normal distribution in both cases, and that the parameters of the normal distribution change with the camera vibration.
The parameters of the error distribution are the mean μ and the standard deviation σ; the parameters of the camera vibration are the amplitude A and the frequency F. To verify the relationships between the error-distribution parameters and the vibration parameters, we performed experiments using a vibration testing system, imposing simple harmonic motion on the stereo camera (a Bumblebee). The experiments showed that the camera vibration did not affect the mean μ, that the standard deviation σ is positively correlated with the amplitude A, and that σ is negatively correlated with the frequency F. Using these relationships, we estimate the error parameters from the vibration parameters and thereby establish the error model of the stereo measurement system. Moreover, we define the existence probability of an object using the error parameters.
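The reported relationships can be turned into a small illustrative model: a least-squares fit of σ against amplitude and frequency (the linear form is our assumption; the abstract only reports the signs of the correlations), and an existence probability computed from the fitted normal error model.

```python
import numpy as np
from math import erf, sqrt

def fit_sigma_model(A, F, sigma):
    """Least-squares fit of sigma ~ c0 + c1*A + c2*F. The linear form is
    a hypothetical choice; only the correlation signs come from the study."""
    X = np.column_stack([np.ones_like(A), A, F])
    coef, *_ = np.linalg.lstsq(X, sigma, rcond=None)
    return coef

def existence_probability(mu, sigma, half_width=0.5):
    """P(|disparity error| <= half_width) when the error is modelled as
    N(mu, sigma^2); half_width is an assumed tolerance."""
    z1 = (half_width - mu) / (sigma * sqrt(2))
    z2 = (-half_width - mu) / (sigma * sqrt(2))
    return 0.5 * (erf(z1) - erf(z2))
```

With a fitted model, σ can be predicted for an observed vibration (A, F) and plugged into the probability, which is the spirit of the proposed error model.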
Nighttime images of a scene from a surveillance camera have lower contrast and higher noise than the corresponding daytime images of the same scene because of low illumination. Denighting is an image-enhancement method that improves nighttime images so that they are closer to those that would have been taken during the daytime. The method exploits the fact that background images of the same scene have been captured throughout the day at much higher quality. We present several results of enhancing low-quality nighttime images with denighting.
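A minimal sketch of the background-substitution idea behind denighting is given below; the thresholded foreground mask and its parameter are assumptions, and the actual method is more elaborate than this.

```python
import numpy as np

def denight(night, night_bg, day_bg, diff_thresh=25):
    """Pixels that differ little from the nighttime background are
    replaced by the high-quality daytime-background pixels; changed
    (foreground) pixels keep their nighttime values. diff_thresh is an
    assumed parameter, not one from the paper."""
    night_f = night.astype(float)
    diff = np.abs(night_f - night_bg.astype(float))
    foreground = diff > diff_thresh
    out = np.where(foreground, night_f, day_bg.astype(float))
    return out.astype(np.uint8)
```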
This paper proposes a novel focus measure based on self-matching. A unique pencil-shaped profile is obtained by comparing the similarity between patterns extracted around each position in a scene. Based on this, a new criterion function, CPV, is defined to evaluate whether a scene is in or out of focus. Orientation Code Matching (OCM) is adopted as the similarity measure because of its invariance to contrast. Experiments using a telecentric lens demonstrate the efficiency of the proposed measure: compared with conventional focus measures, the OCM-based CPV is robust to changes in illumination. Using this method, pan-focused images are composed and depth information is recovered.
Image processing methods that detect and track a particular moving object in images from a fixed camera attract attention in various fields and are a very important subject. In this paper, we propose a moving-object tracking method that can cope with changes in the object region caused by the random-walk motion of the object itself, and with changes in brightness caused by environmental changes such as occlusion (masking) or changes in illumination. The proposed method is robust to illumination change because it is based on Orientation Code Matching, which has been demonstrated to be robust to occlusion and illumination change. In addition, using motion vectors derived from the continuity of random-walk motion, the method can discriminate between similarly moving objects and track them individually, even when similar objects are present. Through several experiments, this paper verifies the effectiveness of the proposed method.
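A minimal sketch of the motion-vector-continuity idea used to discriminate similarly moving targets follows; the linear extrapolation and the gate radius are assumed simplifications of the random-walk model described above.

```python
def predict_next(positions):
    """Extrapolate the last motion vector to predict the next position.
    Minimal sketch of the continuity assumption, not the paper's model."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    return (2 * x1 - x0, 2 * y1 - y0)

def same_track(candidate, prediction, gate=10.0):
    """Accept a detection as the continuation of this track only if it
    lies within an assumed gate radius of the predicted position; a
    similar-looking object outside the gate is assigned elsewhere."""
    dx = candidate[0] - prediction[0]
    dy = candidate[1] - prediction[1]
    return (dx * dx + dy * dy) ** 0.5 <= gate
```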
This paper mainly discusses two problems: object focusing and depth measurement. First, we propose a novel and robust image-focusing scheme by introducing a new focus measure based on Orientation Code Matching (OCM). A new evaluation function, named complemental pencil volume (CPV), is defined and computed to represent the local sharpness of images, in or out of focus, by comparing the similarity between patterns extracted at the same position within their own scenes. The function yields an identifiable, unique maximum (peak) even for ill-conditioned scenes with low-contrast observations. Experiments show that OCM-based focusing is very robust to changes in brightness and to other irregularities of real imaging systems, such as dark conditions. Second, we applied this robust focusing technique to an image sequence of an object surface to measure its depth profile. A simple planar object surface was used to demonstrate the basic approach, and the results showed successful and precise depth measurement of this object.
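The self-matching intuition behind CPV can be illustrated with a toy measure: a focused image matches its own shifted copies poorly (a sharp pencil-shaped profile), while a defocused image matches them well. The sketch below uses a plain SAD difference in place of the OCM similarity, so it is only an approximation of the idea.

```python
import numpy as np

def self_dissimilarity_sharpness(gray, shift=1):
    """Approximate the depth of the self-matching 'pencil' profile by the
    mean absolute difference between the image and its 4-neighbour
    shifts. SAD stands in for the OCM similarity used in the paper."""
    g = gray.astype(float)
    total = 0.0
    for axis, s in ((0, shift), (0, -shift), (1, shift), (1, -shift)):
        total += np.abs(g - np.roll(g, s, axis=axis)).mean()
    return total / 4.0
```

A sharp texture scores high and a blurred or flat region scores near zero, so the maximum over a focus sequence indicates the best-focused frame.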
Instead of tachometer-type velocity sensors, an effective method is proposed for estimating, in real time, the velocities of agrimotors and the working machines they drive, such as sprayers and harvesters, in real farm fields, based on a robust image-matching algorithm.
The estimates should remain precise even when the wheels slip, and should be stable and robust to the many ill conditions that occur in real-world farm fields; to this end, the robust and fast image-matching algorithm Orientation Code Matching is effectively utilized.
A prototype system has been designed for real-time estimation of agrimotor velocities, and its effectiveness has been verified on many frames obtained in real fields under various weather and ground conditions.
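The core of such a vision-based speed sensor can be sketched as block matching between consecutive ground images followed by a unit conversion. SAD matching stands in here for Orientation Code Matching, and the calibration constants (mm per pixel, frame interval) are hypothetical.

```python
import numpy as np

def best_shift(prev, curr, max_shift=5):
    """Exhaustively search the horizontal shift (in pixels) that best
    aligns the current ground image with the previous one, using a SAD
    criterion in place of the paper's OCM similarity."""
    h, w = prev.shape
    tmpl = prev[:, max_shift:w - max_shift].astype(float)
    best, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        cand = curr[:, max_shift + s:w - max_shift + s].astype(float)
        err = np.abs(tmpl - cand).mean()
        if err < best_err:
            best, best_err = s, err
    return best

def velocity_mps(shift_px, mm_per_px, dt_s):
    """Convert a per-frame pixel shift into metres per second, assuming a
    calibrated ground-plane scale and a fixed frame interval."""
    return shift_px * mm_per_px / 1000.0 / dt_s
```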
This paper proposes a fast method for searching images of environmental observations, even in the presence of scale changes. A new scheme extracts feature areas as tags based on a robust image-registration algorithm called Orientation Code Matching. Extracted tags are stored as template
images and used in tag searching. As the number of tags grows, the search cost becomes a serious problem; in addition, changes in viewing position cause scale changes in the image and matching failures. In our scheme, richness in features is important for tag generation, and entropy is used to evaluate the diversity of edge
directions, which is stable under scale changes of the image. This characteristic helps limit the search area and reduce computational cost. Scaling factors are estimated from the orientation code density, i.e., the percentage of effective codes in a fixed-size tag area; an estimated scaling factor is used to match the scale of a template image to that of an observed image. Experiments are performed to compare computation times and to verify the effectiveness of the estimated scaling factor on real scenes.
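The density-based scale estimation can be illustrated as follows. The assumption that the effective-code density varies roughly as the inverse of the scale (edge pixels grow linearly with scale while the area grows quadratically) is ours, for illustration only.

```python
import numpy as np

def code_density(codes, invalid_code):
    """Fraction of effective (non-invalid) orientation codes within a
    fixed-size tag region."""
    return float(np.mean(codes != invalid_code))

def estimate_scale(tag_codes, obs_codes, invalid_code):
    """If density ~ 1/scale (assumed proportionality), the scaling
    factor between template and observation is the density ratio."""
    return code_density(tag_codes, invalid_code) / code_density(obs_codes, invalid_code)
```

The estimated factor would then be used to resample the template before matching, limiting the search to a single scale instead of a full scale pyramid.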
This paper proposes a new scheme for robust tagging for landmark definition in unknown environments, using qualitative evaluations based on Orientation Code representation and matching, which was originally proposed for robust image registration even in the presence of illumination change and occlusion. The characteristics necessary for effective tags, namely richness, similarity, and uniqueness, are considered in order to design an algorithm for tag extraction. These qualitative considerations, combined with the robust image-registration algorithm, yield a simple and robust algorithm for tag definition.