Over the past five years, the computer vision community has explored many avenues of research in Automatic Target Recognition (ATR). Notable advances have been made, and large-scale evaluations of ATR technologies must now be carried out to determine the limitations of the recently proposed methods and the most promising directions for future work.
ROBIN, a project funded by the French Ministry of Defence and the French Ministry of Research, aims to become a new reference for benchmarking ATR algorithms in operational contexts. The project, led by major companies and research centers involved in Computer Vision R&D in the field of Defense (Bertin Technologies, CNES, ECA, DGA, EADS, INRIA, ONERA, MBDA, SAGEM, THALES), recently released a large dataset of several thousand hand-annotated infrared and RGB images of different targets in different situations.
Setting up an evaluation campaign requires defining, accurately and carefully, the datasets (both for training ATR algorithms and for evaluating them), the tasks to be evaluated, and finally the protocols and metrics for the evaluation. ROBIN makes interesting contributions on each of these three points.
This paper first describes, justifies, and defines the set of functions used in the ROBIN competitions that are relevant for evaluating ATR algorithms (Detection, Localization, Recognition, and Identification). It also defines the metrics and the protocol used to evaluate these functions. The second part of the paper presents and discusses the results obtained by several state-of-the-art algorithms on the SAGEM DS database (a subpart of ROBIN).
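To give a concrete flavor of what such an evaluation involves, the sketch below shows one common way detection results are scored against ground-truth annotations: greedy one-to-one matching of predicted boxes to annotated boxes under an overlap (intersection-over-union) threshold, yielding precision and recall. This is a minimal illustrative example, not the actual ROBIN protocol or metrics, which are defined in the body of the paper; the function names, box format, and the 0.5 threshold are assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def detection_scores(predictions, ground_truth, threshold=0.5):
    """Greedily match each prediction to at most one unmatched ground-truth
    box with IoU >= threshold; return (precision, recall)."""
    matched = set()
    true_positives = 0
    for p in predictions:
        best, best_iou = None, threshold
        for i, g in enumerate(ground_truth):
            if i in matched:
                continue
            overlap = iou(p, g)
            if overlap >= best_iou:
                best, best_iou = i, overlap
        if best is not None:
            matched.add(best)
            true_positives += 1
    precision = true_positives / len(predictions) if predictions else 0.0
    recall = true_positives / len(ground_truth) if ground_truth else 0.0
    return precision, recall
```

Real campaigns refine this basic scheme in ways the thresholded matching above glosses over (confidence-ranked matching, localization accuracy, per-class breakdowns), which is precisely why a carefully specified common protocol, as ROBIN provides, is needed for fair comparison.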