Advancing the sustainable use and conservation of marine environments is urgent. Tons of debris, including macro- and microplastics generated on land, are entering the oceans, marine resources are decreasing, and many species are facing extinction. Although satellite remote sensing techniques are commonly used for global environmental monitoring, it is still difficult to detect small objects such as floating debris on the vast ocean surface, and the ecosystems deep in the oceans where light does not reach are unobservable. An autonomous monitoring system consisting of optimally controlled robots is required for acquiring spatiotemporally rich marine data. However, object detection in marine environments, a function the robots need for underwater and aerial monitoring, has not been extensively studied. Here, we argue that state-of-the-art deep-learning-based object detection works well for monitoring underwater ecosystems and marine debris. We found that by using the deep-learning object-detection algorithm YOLO v3, underwater sea life and debris floating on the ocean surface can be detected with mean average precisions (mAP) of 69.6% and 77.2%, respectively. We anticipate our results to be a starting point for developing tools that enable safe and precise acquisition of marine data to elucidate and utilize this last frontier.
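The mAP figures above average per-class average precision (AP) over the object classes. As a rough, self-contained sketch of how one class's AP is computed from ranked detections (toy inputs, not the paper's data or evaluation code):

```python
def average_precision(detections, num_gt):
    """All-point AP: detections are (score, is_true_positive) pairs,
    num_gt is the number of ground-truth objects for this class."""
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = 0
    precisions, recalls = [], []
    for rank, (_score, is_tp) in enumerate(detections, start=1):
        tp += is_tp
        precisions.append(tp / rank)   # fraction of detections so far that are correct
        recalls.append(tp / num_gt)    # fraction of ground truth recovered so far
    # area under the step-wise precision-recall curve
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap

# toy example: 4 detections (one false positive), 3 ground-truth objects
dets = [(0.9, 1), (0.8, 1), (0.7, 0), (0.5, 1)]
print(average_precision(dets, num_gt=3))  # ≈ 0.917
```

Real evaluations additionally match each detection to ground truth by bounding-box overlap before labelling it a true or false positive, and some benchmarks interpolate the precision-recall curve differently.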
Agriculture is one of the most important industries in the world. To improve production efficiency and reduce costs, modern agriculture is no longer confined to farm fields but is rapidly expanding into the information field as well. The recent trend in agriculture is toward precision farming, which creates great demand for IT support. The future of precision agriculture is considered highly promising, and many solution packages will be developed to support farming activities throughout the entire farming cycle.
Efficient and safe facility maintenance has become a serious social problem owing to the declining labor force, facility deterioration over the years, and the rise of large-scale natural disasters. For electric power companies, maintaining and inspecting power equipment spread over wide areas is an important management issue. Identifying the electric poles that require maintenance is one of the essential inspection tasks. To identify the electric poles in an image, several methods focusing on their unique features, such as color and shape, have been proposed. However, this feature-based approach suffers from noise caused by shooting conditions. Another approach, using a laser scanning technique, requires high computational cost for handling the obtained point cloud data. We explored methods to efficiently detect electric poles in a large number of images taken by a vehicle-mounted camera driven through an urban area and its suburbs. Here, we show that the single shot MultiBox detector (SSD), which has been successfully used for object detection in images, can be effectively applied to this task. We trained SSD models using around 600 labeled images and evaluated their performance on 100 test images. In the evaluation, we examined whether pole-like objects such as telephone poles, traffic light poles, and tree trunks can be distinguished from electric poles. We also evaluated the influence of the background and of exteriors attached to the poles. We found that electric poles can be detected with an average precision (AP) of 72.2%. Our results demonstrate the operational feasibility of an autonomous electric pole inspection system that implements a deep-network-based object detector.
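An AP figure like 72.2% is computed after matching each detected box to a ground-truth pole by intersection over union (IoU). A minimal sketch of that overlap measure, with hypothetical pixel coordinates:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# a predicted pole box vs. a ground-truth box (tall and narrow, as poles typically are)
pred = (100, 50, 140, 400)
gt   = (110, 50, 150, 400)
print(iou(pred, gt))  # 0.6
```

Under the common PASCAL-style criterion, a detection with IoU ≥ 0.5 against an unmatched ground-truth box counts as a true positive; the exact threshold used in this study is not stated in the abstract.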
Anomaly detection using cameras mounted on UAV platforms is becoming increasingly popular for operation and maintenance (O&M), particularly for large-scale structures such as buildings and bridges. A UAV-based detection system can be expected to reduce cost, ensure safety, and provide stability for infrastructure O&M. As imaging technologies, image registration and change detection play a central role in an anomaly detection system. Two key factors need to be improved in this respect. First, because of near-distance photographing and the complex surface composition of structures, a robust plane-level matching method is essential for high-precision image registration in change detection. However, since many parts of structure surfaces do not have enough feature points, it is difficult to match planes using a homography transformation based on corresponding feature points. Second, plane-level change detection produces much noise in border areas because of homography-transfer deviation and information redundancy. To solve these two problems, we propose a robust method based on a combination of edge detection and geometric constraints that performs plane-level registration and reduces change detection noise. For registration, making good use of pixel information in the border area, we expand the border area to extract each plane regardless of the number of feature points. For noise reduction, we excise the border information to reduce the effect of information redundancy. Validation experiments were performed with several sets of image pairs. We succeeded in extracting planes in images with 92% coverage and 91% precision, while noise was reduced to 30% of its previous level on average. The evaluation shows that our proposed method achieves high precision and high robustness for an anomaly detection system.
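The expand/excise steps can be pictured as dilating and then eroding a binary plane mask. The following is a deliberately simplified stand-in for the paper's method (square structuring element, toy 5×5 mask, pure Python), not the actual implementation:

```python
def expand(mask, margin):
    """Grow a binary plane mask by `margin` pixels (dilation), so the border
    band is included when extracting a plane that lacks feature points."""
    h, w = len(mask), len(mask[0])
    return [[1 if any(mask[yy][xx]
                      for yy in range(max(0, y - margin), min(h, y + margin + 1))
                      for xx in range(max(0, x - margin), min(w, x + margin + 1)))
             else 0 for x in range(w)] for y in range(h)]

def excise(mask, margin):
    """Shrink a binary change mask by `margin` pixels (erosion), discarding the
    border band where homography-transfer deviation produces noise."""
    h, w = len(mask), len(mask[0])
    return [[1 if all(mask[yy][xx]
                      for yy in range(max(0, y - margin), min(h, y + margin + 1))
                      for xx in range(max(0, x - margin), min(w, x + margin + 1)))
             else 0 for x in range(w)] for y in range(h)]

plane = [[0, 0, 0, 0, 0],
         [0, 1, 1, 1, 0],
         [0, 1, 1, 1, 0],
         [0, 1, 1, 1, 0],
         [0, 0, 0, 0, 0]]
grown = expand(plane, 1)   # covers the full 5x5 area, border band included
core  = excise(plane, 1)   # keeps only the single interior pixel at (2, 2)
```

The real method additionally uses edge detection and geometric constraints to decide where each plane's border lies; the fixed-margin morphology above only illustrates the expand-then-excise idea.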
Unmanned aerial vehicles (UAVs) are being used to reduce the cost and risk of facility inspections. For power distribution companies, power line inspection to ensure a stable power supply is an important but costly task. It includes deterioration diagnosis, detection of foreign matter adhesion, and estimation of power line-tree conflict risk, all of which are currently performed visually on foot. In this study, we explore methods for detecting and visualizing power line-tree conflicts using aerial images taken by drones. To detect a power line-tree conflict, we must first recognize the power lines and trees in the aerial images to identify the “candidate” regions of conflict, and second, estimate the actual positional relationship between them in 3D. However, as previous studies have shown, detecting power lines in an image is challenging because the lines are very narrow and monochromatic, which makes feature extraction difficult. This specific character of power lines can also cause failure in 3D reconstruction, in which feature matching among images is necessary. Here, we show that convolutional neural networks (CNNs) can be effectively applied to the recognition of power lines and trees in an image. We also found that, in mapping the candidate region of conflict onto a 3D model, the power line position could be estimated by taking the pole height into account. This way, even if it is difficult to reconstruct the power line in 3D, a user can make the final decision about the conflict by checking the depth and/or height directional relationship.
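The final height-directional check a user might make can be sketched as follows. Everything here is hypothetical: the function name, the sag term, and all numbers are illustrative, not the paper's model:

```python
def conflict_risk(pole_height_m, line_sag_m, tree_height_m, clearance_m):
    """Toy check: approximate the power line height from the supporting pole
    height minus an assumed sag, then flag a conflict if the tree top comes
    within the required clearance of the line."""
    line_height = pole_height_m - line_sag_m
    return tree_height_m >= line_height - clearance_m

print(conflict_risk(pole_height_m=12.0, line_sag_m=1.5,
                    tree_height_m=9.5, clearance_m=1.0))  # True: tree within 1 m of the line
print(conflict_risk(pole_height_m=12.0, line_sag_m=1.5,
                    tree_height_m=8.0, clearance_m=1.0))  # False
```

In practice the tree height would come from the 3D reconstruction and the clearance from the operator's regulations; the point is only that pole height can substitute for a power line that failed to reconstruct.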
Position information of unmanned aerial vehicles (UAVs) and objects is important for inspections conducted with UAVs. The accuracy with which changes in the object to be inspected are detected depends on the accuracy of the past object data being compared; therefore, accurate position recording is important. A global positioning system (GPS) is commonly used for estimating position, but its accuracy is sometimes insufficient. Therefore, other methods have been proposed, such as visual simultaneous localization and mapping (visual SLAM), which uses monocular camera data to reconstruct a 3D model of a scene and simultaneously estimates the camera trajectory using only photos or videos.
In visual SLAM, the UAV position is estimated on the basis of stereo vision (localization), and 3D points are mapped on the basis of the estimated UAV position (mapping). Localization and mapping are carried out alternately in a sequential process. Finally, all the UAV positions are estimated and an integrated 3D map is created. Any given iteration of this sequential processing has estimation error, but the next iteration uses the previously estimated position as its base position regardless of that error. As a result, error accumulates until the UAV returns to a location it has passed before. Our research aims to mitigate this problem. We propose two new methods.
(1) Accumulated error caused by local matching with sequential low-altitude images (i.e., close-up photos) is corrected with global matching between low- and high-altitude images. To perform global matching that is robust against error, we implemented a method wherein the expected matching areas are narrowed down on the basis of UAV position and barometric altimeter measurements.
(2) Under the assumption that the absolute coordinates include axis-rotation error, we propose an error-reduction method that minimizes the difference between the UAV altitudes obtained from visual SLAM and those obtained from sensors (barometer and thermometer).
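The narrowing step in (1) can be pictured with a toy calculation: given the UAV's horizontal position and the two barometric altitudes, predict where the low-altitude image's footprint lies inside the high-altitude image and restrict feature matching to that window. Everything below, including the linear ground-to-pixel mapping and the field-of-view constant, is an illustrative assumption, not the paper's implementation:

```python
def search_window(uav_xy_m, low_alt_m, high_alt_m, img_px):
    """Predict the pixel region of the high-altitude image (assumed centred on
    the origin, img_px pixels square) where the low-altitude image's ground
    footprint falls, so matching is attempted only there."""
    half_fov = 0.5                         # metres of ground per metre of altitude (assumed camera FOV)
    ground_half = half_fov * high_alt_m    # half-width of the high-altitude footprint (m)
    px_per_m = (img_px / 2) / ground_half  # linear metres-to-pixels mapping
    cx = img_px / 2 + uav_xy_m[0] * px_per_m
    cy = img_px / 2 + uav_xy_m[1] * px_per_m
    r = half_fov * low_alt_m * px_per_m    # pixel radius of the low-altitude footprint
    return (cx - r, cy - r, cx + r, cy + r)

# UAV 20 m east of centre at 10 m altitude; high-altitude image taken from 100 m, 1000 px wide
print(search_window((20.0, 0.0), 10.0, 100.0, 1000))  # (650.0, 450.0, 750.0, 550.0)
```

Here the matcher searches a 100 × 100 px window instead of the full 1000 × 1000 px image, which is what makes the global matching robust against spurious correspondences.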
The proposed methods reduced accumulated error by using high-altitude images and sensor measurements, improving the accuracy of UAV- and object-position estimation.
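As a toy illustration of the axis-rotation correction in (2): assume the reconstructed trajectory is rotated by an unknown angle about a horizontal axis, and choose the angle that minimizes the discrepancy between the rotated SLAM altitudes and the sensor-derived altitudes. The grid search below stands in for whatever optimizer is actually used, and the data are synthetic:

```python
import math

def fit_rotation(slam_xz, baro_z):
    """Find the rotation angle (radians, about the horizontal axis) that best
    aligns the SLAM altitudes with the barometric altitudes, by grid search
    over ±30 degrees in 0.1-degree steps."""
    def cost(theta):
        s, c = math.sin(theta), math.cos(theta)
        # altitude of each trajectory point after applying the candidate rotation
        return sum((x * s + z * c - bz) ** 2 for (x, z), bz in zip(slam_xz, baro_z))
    return min((math.radians(d / 10) for d in range(-300, 301)), key=cost)

# synthetic level flight at 10 m whose reconstruction was tilted by +5 degrees
t = math.radians(5.0)
tilted = [(x * math.cos(t) - 10.0 * math.sin(t),
           x * math.sin(t) + 10.0 * math.cos(t)) for x in range(0, 50, 5)]
correction = fit_rotation(tilted, [10.0] * len(tilted))
print(math.degrees(correction))  # ≈ -5.0, cancelling the tilt
```

Only the altitude component is compared, mirroring the idea that the barometer and thermometer give an independent altitude reference even when the absolute horizontal coordinates are uncertain.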