Proc. SPIE. 10611, MIPPR 2017: Remote Sensing Image Processing, Geographic Information Systems, and Other Applications
KEYWORDS: Principal component analysis, Detection and tracking algorithms, Cameras, Image segmentation, Digital cameras, Feature extraction, Ear, Manufacturing equipment, Space operations, RGB color model
In this paper, we propose an image-based approach for automatically recognizing the flowering stage of maize. A modified HOG/SVM detection framework is first adopted to detect the ears of maize. Then, low-rank matrix recovery is used to precisely extract the ears at the pixel level. Finally, a new feature called the color gradient histogram is proposed as an indicator to determine the flowering stage. A comparison experiment was carried out to verify the validity of our method, and the results indicate that it can meet the demands of practical observation.
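The abstract does not define the color gradient histogram precisely; a minimal sketch, assuming the feature pools per-channel gradient magnitudes of an RGB image into a fixed-length, L1-normalized histogram (the function name and binning are illustrative, not from the paper):

```python
import numpy as np

def color_gradient_histogram(img, bins=8):
    """Histogram of per-channel gradient magnitudes of an RGB image.

    Hypothetical reading of the paper's feature: for each color channel,
    compute the gradient magnitude, histogram it, concatenate the
    per-channel histograms, and L1-normalize the result.
    """
    img = img.astype(np.float64)
    feats = []
    for c in range(img.shape[2]):
        gy, gx = np.gradient(img[:, :, c])
        mag = np.hypot(gx, gy)
        hist, _ = np.histogram(mag, bins=bins, range=(0.0, 255.0))
        feats.append(hist)
    feat = np.concatenate(feats).astype(np.float64)
    return feat / max(feat.sum(), 1.0)
```

Such a descriptor would change markedly once yellow tassels appear against green foliage, which is consistent with its use as a flowering indicator.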
In this paper, we propose an environmentally adaptive crop extraction method for agricultural automation using a LAB Gaussian model and super-pixel segmentation. A Gaussian mixture model in the LAB color space is introduced to describe the distribution of crop pixels so as to adapt to the outdoor environment, and the super-pixel technique is applied to preserve structure. Comparison experiments show that our method outperforms other commonly used extraction methods.
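The core of the LAB Gaussian model is scoring each pixel's color under a mixture of Gaussians fitted to crop pixels. A minimal numpy sketch of that likelihood evaluation, with made-up component parameters (the paper's fitted values are not given in the abstract) standing in for, say, sunlit and shaded foliage:

```python
import numpy as np

def gauss_logpdf(x, mean, var):
    # Diagonal-covariance Gaussian log-density, evaluated row-wise.
    x = np.atleast_2d(x).astype(np.float64)
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var, axis=1)

def crop_log_likelihood(lab_pixels, weights, means, variances):
    """Log-likelihood of pixels (N x 3, LAB) under a diagonal GMM."""
    comps = [np.log(w) + gauss_logpdf(lab_pixels, m, v)
             for w, m, v in zip(weights, means, variances)]
    return np.logaddexp.reduce(np.stack(comps), axis=0)

# Illustrative two-component crop model (parameters are assumptions,
# not taken from the paper): bright and dark green foliage in LAB.
weights = [0.6, 0.4]
means = [np.array([60.0, -40.0, 40.0]), np.array([35.0, -25.0, 25.0])]
variances = [np.array([100.0, 50.0, 50.0]), np.array([100.0, 50.0, 50.0])]
```

In the full method, such per-pixel scores would be aggregated over super-pixels before thresholding, which is what preserves object structure.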
In this paper, we propose an illumination-invariant crop extraction method based on specularity learning. Several useful contextual cues, including object appearance and location, inspired by the human recognition mechanism, are introduced and integrated into a machine learning architecture, yielding a well-trained highlight-region classifier. Combined with a hue-intensity look-up table and super-pixel techniques, the classifier gives the final extraction result. A comparison experiment confirmed the validity and feasibility of our method.
For image classification tasks, the region containing the object, which plays a decisive role, is indefinite in both position and scale. In this case, it is not appropriate to apply the spatial pyramid matching (SPM) approach directly. In this paper, we describe an approach that handles this problem via region of interest (ROI) detection. It first uses an object detection algorithm to separate an image into object and scene regions, and then constructs spatial histogram features for each region separately based on SPM. Moreover, the detection score is used to rescore the classification results. Our contributions are: i) verifying the feasibility of using a state-of-the-art object detection algorithm to separate foreground from background for image classification; ii) a simple method, called <i>coarse object alignment matching</i>, for constructing histograms from the foreground and background provided by object localization. Experimental results demonstrate a clear superiority of our approach over the standard SPM method, and it also outperforms many state-of-the-art methods on several categories.
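The SPM building block referenced above is standard: visual-word labels of local features are histogrammed over a pyramid of spatial cells and concatenated. A minimal sketch (level weighting omitted for brevity; the paper's coarse object alignment matching would apply this separately to the detected object and scene regions):

```python
import numpy as np

def spm_histogram(labels, xs, ys, width, height, n_words, levels=2):
    """Spatial-pyramid histogram of visual-word labels.

    labels : visual-word index per local feature
    xs, ys : feature coordinates in the image
    Cells at level l form a 2^l x 2^l grid; the histograms of all
    cells at all levels are concatenated.
    """
    feats = []
    for level in range(levels + 1):
        g = 2 ** level
        cx = np.minimum((xs * g // width).astype(int), g - 1)
        cy = np.minimum((ys * g // height).astype(int), g - 1)
        for i in range(g):
            for j in range(g):
                mask = (cx == i) & (cy == j)
                feats.append(np.bincount(labels[mask], minlength=n_words))
    return np.concatenate(feats)
```

With three levels the descriptor has (1 + 4 + 16) * n_words dimensions, and every level's cells jointly count each feature exactly once.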
Rice yield estimation is an important topic in agricultural research, and rice density is one of its useful factors. In this paper, we propose a new method to automatically detect rice density from the rice transplanting stage to the rice jointing stage. It detects rice planting density from low-level features of rice image sequences taken in the field. Moreover, an automatic detection method for the rice jointing stage is proposed to terminate the rice density detection algorithm. The validity of both the rice density detection method and the jointing stage detection method is demonstrated experimentally.
Proc. SPIE. 8918, MIPPR 2013: Automatic Target Recognition and Navigation
KEYWORDS: Image processing algorithms and systems, Agriculture, Detection and tracking algorithms, Image segmentation, Image processing, Digital cameras, Medical imaging, Digital imaging, Meteorology, RGB color model
The automatic observation of field crops has attracted increasing attention recently. Replacing the existing manual observation with image processing technology enables timely observation and consistent management, and extracting the wheat plants from field wheat images is the basis of this task. To improve the accuracy of wheat segmentation, a novel two-stage wheat image segmentation method is proposed. The training stage adjusts several key thresholds used in the segmentation stage to achieve the best segmentation results and records their statistics; the segmentation stage compares different color-index values to determine the class of each pixel. To verify the superiority of the proposed algorithm, we compared our method with other crop segmentation methods. Experimental results show that the proposed method has the best performance.
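The abstract names color indices but not which ones; a minimal sketch of the kind of per-pixel decision involved, assuming the common excess-green index ExG = 2G − R − B with a fixed threshold standing in for the paper's trained thresholds:

```python
import numpy as np

def excess_green_mask(rgb, threshold=20.0):
    """Segment vegetation with the excess-green index ExG = 2G - R - B.

    rgb: (H, W, 3) image array. The threshold here is illustrative;
    the paper learns its thresholds in a training stage.
    """
    r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
    exg = 2.0 * g - r - b
    return exg > threshold
```

The two-stage design in the paper would tune such thresholds on labeled images before applying them, rather than fixing them a priori as done here.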
Cotton, as one of the four major economic crops, is of great significance to the development of the national economy. Monitoring cotton growth status by automatic image-based detection is attractive because of its low cost, low labor, and capability for continuous observation. However, little research has been done on close observation of the different growth stages of field crops using digital cameras. We therefore developed algorithms to detect growth information and automatically predict the starting dates of cotton growth stages. In this paper, we introduce an approach for automatically detecting the five-true-leaves stage, a critical growth stage of cotton. Because of the drawbacks caused by illumination and the complex background, global coverage cannot serve as the sole criterion of judgment. Consequently, we propose a new method that determines the five-true-leaves stage by detecting the number of nodes between the main stem and the side stems, based on the agricultural meteorological observation specification. The error between the starting date predicted by the proposed algorithm and that of manual observation is no more than one day.
In this paper, we propose a specularity-invariant crop extraction method using a probabilistic super-pixel Markov random field (MRF). Our method is based on the underlying rule that intensity changes gradually between highlight areas and their neighboring non-highlight areas. This prior knowledge is embedded into the MRF-MAP framework by modeling the local and mutual evidences of nodes. The marginal probability of each node in the label field is then iteratively computed by the belief propagation algorithm, which leads to the final solution. Comparative experimental results show that our method outperforms other commonly used extraction methods, yielding the highest performance with the lowest standard deviation.
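The marginal computation at the heart of this method is sum-product belief propagation. A minimal sketch on a chain of nodes, where message passing is exact and short (the paper runs BP on a super-pixel graph; the unary and pairwise tables here are generic placeholders for its local and mutual evidences):

```python
import numpy as np

def chain_bp_marginals(unary, pairwise):
    """Exact sum-product belief propagation on a chain of nodes.

    unary   : (N, K) non-negative node evidences
    pairwise: (K, K) compatibility between neighboring labels
    Returns (N, K) marginal probabilities.
    """
    n, k = unary.shape
    fwd = np.ones((n, k))   # messages passed left -> right
    bwd = np.ones((n, k))   # messages passed right -> left
    for i in range(1, n):
        m = pairwise.T @ (unary[i - 1] * fwd[i - 1])
        fwd[i] = m / m.sum()
    for i in range(n - 2, -1, -1):
        m = pairwise @ (unary[i + 1] * bwd[i + 1])
        bwd[i] = m / m.sum()
    beliefs = unary * fwd * bwd
    return beliefs / beliefs.sum(axis=1, keepdims=True)
```

A smoothing pairwise table (large diagonal) makes an ambiguous node inherit the label of its confident neighbors, which is exactly how the gradual-intensity prior propagates evidence into highlight regions.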
In this paper, we explore the application of computer vision technology to automatically detect the tasseling stage of maize. The commonly used HOG/SVM detection framework is chosen to recognize the ears of maize and determine the occurrence time of the stage, but it cannot guarantee a high precision rate. We therefore propose a new method, called Spatio-temporal Saliency Mapping, which highlights the ears while suppressing the background and significantly improves detection performance. A comparison experiment was carried out to verify the validity of our method, and the results indicate that it can meet the demands of practical observation.
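The abstract does not specify how the spatio-temporal saliency map is built. As a stand-in illustration only, one simple way to exploit the temporal dimension of a fixed-camera sequence is to score each pixel by its deviation from the temporal median background (this is plain background subtraction, not the paper's method):

```python
import numpy as np

def temporal_saliency(frames):
    """Per-pixel deviation of the last frame from the temporal median.

    frames: (T, H, W) grayscale sequence from a fixed camera.
    Returns a [0, 1] map that is high where the scene has changed.
    """
    frames = frames.astype(np.float64)
    background = np.median(frames, axis=0)
    sal = np.abs(frames[-1] - background)
    rng = sal.max() - sal.min()
    return sal / rng if rng > 0 else sal
```

Masking or reweighting the image with such a map before running the HOG/SVM detector suppresses static background clutter, which is the role the paper assigns to its saliency mapping.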
This paper presents an effective method for ship detection in optical satellite images using optical flow and saliency, which can identify multiple ship targets against a complex dynamic sea background and reduces the false positive rate compared to traditional methods. Moving targets in the image are highlighted by the classical optical flow method, and the dynamic waves are suppressed by combining a state-of-the-art saliency method. We make full use of low-level (size, color, etc.) and high-level (adjacent-frame information, etc.) image features, which adapt to different dynamic-background situations. Compared to existing methods, experimental results demonstrate the robustness and high performance of the proposed method.
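The fusion step can be sketched independently of how the two maps are computed. A minimal sketch, assuming both a dense optical-flow magnitude map and a static saliency map are already available (their computation, and the paper's actual fusion rule, are not given in the abstract; simple multiplicative gating is used here as an illustration):

```python
import numpy as np

def fuse_motion_and_saliency(motion_mag, saliency, thresh=0.5):
    """Suppress wave clutter by gating motion energy with saliency.

    motion_mag: per-pixel optical-flow magnitude (any dense flow method)
    saliency  : per-pixel static saliency map
    Both maps are min-max normalized, multiplied, and thresholded, so a
    pixel must be both moving and salient to be kept.
    """
    def norm(x):
        x = x.astype(np.float64)
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)
    fused = norm(motion_mag) * norm(saliency)
    return fused > thresh
```

Waves produce strong motion but weak object-like saliency, so the product keeps ship pixels while discarding wave pixels, matching the false-positive reduction the paper reports.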
Computer vision technology has been increasingly used to automatically observe crop growth state, but crop canopy height, one of the key parameters in agro-meteorological observation, is still measured manually in the actual observation process. To measure canopy height automatically from the forward-and-downward-looking images of the existing monocular vision observation system, a novel method is proposed: the canopy height is measured indirectly by a solving algorithm for the actual height of vertical objects (SAAH), with the help of an intelligent sensor device. The experimental results verify the feasibility and validity of our method and show that it can meet actual observation demands.
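The geometric core of monocular height measurement is the similar-triangles relation of the pinhole camera model. A minimal sketch of that relation only (the paper's SAAH algorithm additionally handles camera tilt and the forward-and-downward viewing geometry, which are omitted here; the function name and parameters are illustrative):

```python
def object_height(pixel_top, pixel_bottom, focal_px, distance_m):
    """Height of a vertical object from its image extent (pinhole model).

    pixel_top/bottom: image row coordinates of the object's top and base
    focal_px        : focal length expressed in pixels
    distance_m      : camera-to-object distance in meters (in the paper,
                      supplied with the help of an intelligent sensor)
    """
    return abs(pixel_bottom - pixel_top) * distance_m / focal_px
```

For example, an object spanning 100 pixels seen by a 1000-pixel focal length camera at 5 m is 0.5 m tall.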