A frequently occurring interaction task in UAS video exploitation is the marking or selection of objects of interest in the
video. When the image analyst visually detects an object of interest, selecting and marking it for further exploitation,
documentation, and communication with the team is a necessary task. Today, object selection is usually performed by
mouse interaction. Because sensor motion causes all objects in the video to move, object selection can be rather challenging,
especially when strong and fast ego-motions are present, e.g., with small airborne sensor platforms. Moreover,
objects of interest are sometimes visible too briefly to be selected by the analyst using mouse interaction. To address this
issue we propose an eye tracker as input device for object selection. As the eye tracker continuously provides the gaze
position of the analyst on the monitor, it is intuitive to use the gaze position for pointing at an object. The selection is
then actuated by pressing a button. We integrated this gaze-based “gaze + key press” object selection into Fraunhofer
IOSB's exploitation station ABUL using a Tobii X60 eye tracker and a standard keyboard for the button press.
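The "gaze + key press" scheme can be illustrated with a minimal sketch. The class, method names, and bounding-box representation below are illustrative assumptions, not part of the ABUL or Tobii APIs: the eye tracker continuously updates the gaze position, and a key press selects whichever object currently lies under the gaze point.

```python
# Hypothetical sketch of "gaze + key press" object selection, assuming the
# eye tracker delivers gaze samples in screen coordinates and each candidate
# object is represented by an axis-aligned bounding box (x, y, w, h).

class GazeSelector:
    """Selects the object under the analyst's current gaze on key press."""

    def __init__(self, objects):
        # objects: list of (object_id, bounding_box) pairs
        self.objects = objects
        self.gaze = (0.0, 0.0)  # latest gaze position on the monitor

    def on_gaze_sample(self, x, y):
        # Called continuously by the eye tracker (e.g., at 60 Hz for a Tobii X60).
        self.gaze = (x, y)

    def on_key_press(self):
        # Actuate selection: return the id of the object whose bounding box
        # contains the current gaze point, or None if no object is hit.
        gx, gy = self.gaze
        for obj_id, (x, y, w, h) in self.objects:
            if x <= gx <= x + w and y <= gy <= y + h:
                return obj_id
        return None


selector = GazeSelector([("car", (100, 100, 50, 30)),
                         ("truck", (300, 200, 80, 40))])
selector.on_gaze_sample(120, 115)   # analyst looks at the car
selected = selector.on_key_press()  # "car"
```

In practice the hit test would also have to tolerate gaze-estimation noise (e.g., by enlarging the boxes or snapping to the nearest object), which this sketch omits.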
Representing the object selections in a spatial relational database, ABUL enables the image analyst to efficiently query
the video data in a post-processing step for selected objects of interest with respect to their geographical and other
properties. An experimental evaluation is presented, comparing gaze-based interaction with mouse interaction in the
context of object selection in UAS videos.
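The geographical post-processing query described above can be sketched as follows. The schema, table, and column names are assumptions for illustration (the abstract does not specify ABUL's database layout), and a plain latitude/longitude range filter stands in for a true spatial index.

```python
import sqlite3

# Hypothetical sketch: object selections stored with geographic coordinates,
# queried later for all selections inside a region of interest. The schema
# is an assumption, not ABUL's actual database layout.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE selections (
        id       INTEGER PRIMARY KEY,
        video_id TEXT,
        frame    INTEGER,
        lat      REAL,
        lon      REAL,
        label    TEXT
    )
""")
conn.executemany(
    "INSERT INTO selections (video_id, frame, lat, lon, label) "
    "VALUES (?, ?, ?, ?, ?)",
    [
        ("mission_01", 120, 49.015, 8.425, "vehicle"),
        ("mission_01", 480, 49.020, 8.430, "person"),
        ("mission_02",  60, 48.990, 8.400, "vehicle"),
    ],
)

# Post-processing step: retrieve all selections within a geographic
# bounding box (here a simple range filter on lat/lon).
rows = conn.execute(
    "SELECT video_id, frame, label FROM selections "
    "WHERE lat BETWEEN ? AND ? AND lon BETWEEN ? AND ?",
    (49.0, 49.03, 8.42, 8.44),
).fetchall()
```

A production system would typically use a spatial extension (e.g., geometry columns with an R-tree index) rather than raw coordinate ranges; the sketch only shows the shape of the query.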