Algorithms for automatic semantic segmentation of satellite images provide an effective approach to the generation of vector maps. Convolutional neural networks (CNNs) have achieved state-of-the-art segmentation quality on the satellite-images-to-semantic-labels task. However, the generalization ability of such methods is insufficient for satellite images captured over a different area or during a different season. Recently, Generative Adversarial Networks (GANs) were introduced that can mitigate overfitting through an adversarial loss. This paper focuses on the development of a new GAN model for effective semantic segmentation of multispectral satellite images. The pix2pix<sup>1</sup> model is used as the starting point of the research. It is trained in a semi-supervised setting on aligned pairs of images. Perceptual validation has demonstrated the high quality of the output labels. Evaluation on an independent test dataset has confirmed the robustness of GANs on the task of semantic segmentation of multispectral satellite images.
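As an illustration of the training objective, pix2pix combines an adversarial term with a pixel-wise L1 term weighted by a factor lambda. A minimal NumPy sketch of such a generator loss; the function name and the lambda value are illustrative assumptions, not the implementation from the paper:

```python
import numpy as np

def pix2pix_generator_loss(disc_fake, fake_labels, real_labels, lam=100.0):
    """Pix2pix-style generator objective (sketch): an adversarial BCE
    term pushing the discriminator to judge generated label maps as
    real, plus a lambda-weighted pixel-wise L1 term toward the
    ground-truth labels. lam=100 follows the original pix2pix paper."""
    eps = 1e-12
    adv = -np.mean(np.log(disc_fake + eps))          # BCE with target = 1
    l1 = np.mean(np.abs(fake_labels - real_labels))  # pixel-wise L1
    return adv + lam * l1
```

A perfect generator (discriminator fooled, labels matching exactly) drives both terms to zero; the large L1 weight keeps the output aligned with the ground-truth label map while the adversarial term sharpens it.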
Thermal imaging cameras improve the situational awareness of pilots during aircraft operation. Nowadays thermal sensors are readily available onboard as part of the Enhanced Vision System (EVS). While video synthesized using 3D modeling (Synthetic Vision System, SVS) can be easily displayed on a Head-up Display (HUD) thanks to the presence of area segmentation data, projecting the EVS video on a HUD usually results in an image with large bright areas that partially obscure the view for the flight crew. This paper focuses on the development of the ClearHUD algorithm for effective presentation of the EVS video on a HUD using optical flow estimation. The ClearHUD algorithm is based on estimating the optical flow from both the SVS and the EVS video. The difference between the two optical flows is used to detect obstacles. The areas of the detected obstacles are projected with high intensity, and the remaining regions are filtered using the segmentation from the SVS. The ClearHUD algorithm was implemented in prototype software for testing using 3D modeling. The optical flow for the SVS is estimated using ray tracing. The optical flow for the EVS is estimated using the FlowNet 2.0 convolutional neural network (CNN). The evaluation of the ClearHUD algorithm has shown that it significantly increases the brightness of obstacles and reduces the intensity of non-informative areas.
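The obstacle-detection step, flagging pixels where the EVS and SVS optical flows disagree, could be sketched as follows; the function name and the threshold value are illustrative assumptions rather than details from the paper:

```python
import numpy as np

def obstacle_mask(flow_evs, flow_svs, thresh=2.0):
    """Flag obstacle pixels by comparing two dense optical flow
    fields of shape (H, W, 2). An obstacle absent from the synthetic
    scene makes the real (EVS) flow deviate from the rendered (SVS)
    flow; pixels whose flow vectors differ by more than `thresh`
    pixels are marked True."""
    diff = np.linalg.norm(flow_evs - flow_svs, axis=-1)
    return diff > thresh
```

The resulting boolean mask selects the regions to be projected with high intensity, while the rest of the EVS frame can be attenuated using the SVS segmentation.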
The availability of an accurate dataset is the key requirement for the successful development of an optical flow estimation algorithm. A large number of freely available optical flow datasets were developed in recent years and gave rise to many powerful algorithms. However, most of the datasets include only images captured in the visible spectrum. This paper focuses on the creation of a multispectral optical flow dataset with accurate ground truth. Generating accurate ground truth optical flow is a rather complex problem, as no device for error-free optical flow measurement has been developed to date. Existing methods for ground truth optical flow estimation are based on hidden textures, 3D modeling, or laser scanning. Such techniques either work only with synthetic optical flow or provide only sparse ground truth. In this paper a new photogrammetric method for the generation of accurate ground truth optical flow is proposed. The method combines the accuracy and density of synthetic optical flow datasets with the flexibility of laser-scanning-based techniques. A multispectral dataset comprising various image sequences was generated using the developed method. The dataset is freely available on the accompanying website.
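The photogrammetric principle behind such ground truth is that, for a static scene with known camera intrinsics and poses, the true flow of a 3D point is simply the difference of its pixel projections in the two views. A minimal sketch of that idea with hypothetical helper names; the paper's actual pipeline is more involved:

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of world point X into a camera with
    intrinsics K and pose (R, t): x = K (R X + t), then dehomogenize."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def ground_truth_flow(K, pose1, pose2, X):
    """Exact flow of a static 3D point between two calibrated views:
    the displacement of its projection from view 1 to view 2."""
    return project(K, *pose2, X) - project(K, *pose1, X)
```

Evaluating this displacement for a dense set of reconstructed scene points yields a dense, error-free flow field, unlike sparse laser-scanning-based measurements.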
Accurate egomotion estimation is required for mobile robot navigation. Egomotion is often estimated using optical flow algorithms. For accurate optical flow estimation, most modern algorithms require considerable memory and processor resources, which the simple single-board computers that control the motion of a robot usually do not provide. On the other hand, most modern single-board computers are equipped with an embedded GPU that can be used in parallel with the CPU to improve the performance of an optical flow estimation algorithm. This paper presents a new Z-flow algorithm for efficient computation of optical flow on an embedded GPU. The algorithm is based on phase-correlation optical flow estimation and provides real-time performance on a low-cost embedded GPU. A layered optical flow model is used. Layer segmentation is performed with a graph-cut algorithm using a time-derivative-based energy function. This approach makes the algorithm both fast and robust in low-light and low-texture conditions. The implementation of the algorithm for a Raspberry Pi Model B computer is discussed. For the evaluation, the computer was mounted on a Hercules skid-steered mobile robot equipped with a monocular camera. The evaluation was performed using hardware-in-the-loop simulation and experiments with the Hercules mobile robot. The algorithm was also evaluated on the KITTI Optical Flow 2015 dataset. The resulting endpoint error of the optical flow computed with the developed algorithm was low enough for navigation of the robot along the desired trajectory.
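Phase correlation, the building block named above, recovers the shift between two image patches from the peak of the inverse FFT of the normalized cross-power spectrum. A small NumPy sketch of that basic step for integer shifts; the paper's layered, GPU-accelerated implementation is not reproduced here:

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer (dx, dy) shift of patch `b` relative to
    `a` by classic phase correlation: normalize the cross-power
    spectrum to keep only phase, then locate the impulse in its
    inverse FFT. Wrap-around indices are mapped to negative shifts."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    F /= np.abs(F) + 1e-12                 # keep phase only
    corr = np.abs(np.fft.ifft2(F))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dx, dy
```

Because the estimate depends only on phase, not amplitude, it degrades gracefully under low light and low contrast, which is why the abstract highlights robustness in those conditions.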
Skid-steered robots are widely used as mobile platforms for machine vision systems. However, it is hard to achieve stable motion of such robots along a desired trajectory due to unpredictable wheel slip. It is possible to compensate for the wheel slip and stabilize the motion of the robot using visual odometry. This paper presents a fast optical-flow-based algorithm for estimating the instantaneous center of rotation and the angular and longitudinal speed of the robot. The proposed algorithm is based on the Horn–Schunck variational optical flow estimation method. The instantaneous center of rotation and the motion of the robot are estimated by back-projecting the optical flow field onto the ground surface. The developed algorithm was tested on a skid-steered mobile robot. The robot is based on a mobile platform that includes two pairs of differentially driven motors and a motor controller. A monocular visual odometry system consisting of a single-board computer and a low-cost webcam is mounted on the mobile platform. A state-space model of the robot was derived using standard black-box system identification. The input (commands) and the output (motion) were recorded using a dedicated external motion capture system. The obtained model was used to control the robot without visual odometry data. The paper concludes with an assessment of algorithm quality by comparing the trajectories estimated by the algorithm with data from the motion capture system.
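Once the flow field is back-projected onto the ground plane, each pixel yields a planar point velocity, and the rigid motion (vx, vy, omega) can be recovered by least squares; the instantaneous center of rotation (ICR) is the point where that velocity field vanishes. A simplified sketch under these assumptions, with illustrative names not taken from the paper:

```python
import numpy as np

def planar_motion_from_flow(points, flows):
    """Least-squares fit of a planar rigid motion to ground-plane
    velocities. At point (x, y) the model predicts the velocity
    (vx - omega*y, vy + omega*x); stacking two linear equations per
    point gives an overdetermined system in (vx, vy, omega).
    Returns (vx, vy, omega) and the ICR, the point of zero velocity."""
    A, b = [], []
    for (x, y), (u, v) in zip(points, flows):
        A.append([1.0, 0.0, -y]); b.append(u)
        A.append([0.0, 1.0, x]);  b.append(v)
    vx, vy, omega = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0]
    icr = (-vy / omega, vx / omega) if abs(omega) > 1e-9 else None
    return (vx, vy, omega), icr
```

Setting the predicted velocity to zero gives the ICR coordinates (-vy/omega, vx/omega); for a skid-steered robot, its displacement from the kinematic center directly measures the wheel slip to be compensated.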