The surveillance of large areas to ensure local security requires remote sensors with high temporal and spatial resolution. Captive balloons carrying infrared and visible sensors, such as the ALTAVE captive balloon system, can perform long-term day–night surveillance and secure large areas by monitoring people and vehicles, but doing so manually is an exhausting task for a human operator. To provide more efficient and less arduous monitoring, a deep learning model was trained to detect people and vehicles in images from the captive balloon's infrared and visible sensors. Two databases of about 700 images each, one per sensor, were built manually. Two networks were fine-tuned from a pretrained faster region-based convolutional neural network (Faster R-CNN). The networks reached accuracies of 87.1% for the infrared network and 86.1% for the visible one. Both networks were able to satisfactorily detect multiple objects in an image across a variety of angles, positions, types (for vehicles), and scales, even in the presence of some noise and overlap. Thus, a Faster R-CNN pretrained only on common RGB (red, green, and blue) images can be fine-tuned to work satisfactorily on visible remote sensing (RS) images and even on infrared RS images.