Real-time motion video analysis is a challenging and exhausting task for the human observer, particularly in safety- and
security-critical domains. Hence, customized video analysis systems providing functions for subtasks like
motion detection or target tracking are welcome. While such automated algorithms relieve human operators of
basic subtasks, they impose additional interaction duties on them. Prior work shows that a gaze-enhanced user
interface is beneficial, e.g., for interaction with target tracking algorithms.
In this contribution, we present an investigation of interaction with an independent motion detection (IMD) algorithm.
Besides identifying an appropriate interaction technique for the user interface – again, we compare gaze-based and
traditional mouse-based interaction – we focus on the benefit an IMD algorithm might provide for a UAS video analyst.
In a pilot study, ten subjects performed a moving target detection task in UAS video data twice, once with
automatic support and once without it. We compare the two conditions with respect to performance in terms of
effectiveness (correct target selections). Additionally, we report perceived workload (measured using the NASA-TLX
questionnaire) and user satisfaction (measured using the ISO 9241-411 questionnaire).
The results show that the combination of gaze input and an automated IMD algorithm provides valuable support for the
human observer, increasing the number of correct target selections by up to 62% while reducing workload at the same time.