Video has been a game-changer in how US forces find, track, and defeat their adversaries.
With millions of minutes of video being generated from an increasing number of sensor platforms,
the DoD has stated that the rapid increase in video is overwhelming its analysts. The manpower
required to view and extract usable information from this flood of video is unaffordable, especially
in light of current fiscal constraints. "Search" within full-motion video has traditionally relied on
human tagging of content and on video metadata to filter footage and locate segments of interest
in response to analyst queries. Our approach instead uses a novel machine-vision pipeline to
index FMV, combining object recognition and tracking with event and activity detection. This
enables FMV exploitation in real time as well as forensic look-back within archives. It can
extract more information from video sensor collection, help overburdened analysts form
connections in activity over time, and conserve national fiscal resources in exploiting FMV.
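The core idea above, indexing machine-vision detections rather than raw frames so that both live and archived video become queryable, can be sketched as a toy inverted index. This is a minimal illustration only: the class, field names, and labels here are hypothetical and are not the system described in the text, whose actual detectors and schema are unspecified.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class Detection:
    """One machine-vision hit in a video stream (illustrative schema)."""
    video_id: str     # which sensor clip the detection came from
    timestamp: float  # seconds into the clip
    label: str        # e.g. "vehicle", "person", or an event/activity tag


class FMVIndex:
    """Inverted index mapping object/event labels to video segments."""

    def __init__(self):
        self._index = defaultdict(list)

    def add(self, det: Detection) -> None:
        # Called as detections stream in: supports real-time ingest.
        self._index[det.label].append((det.video_id, det.timestamp))

    def query(self, label: str) -> list:
        # Forensic look-back: every archived segment containing the label.
        return self._index.get(label, [])


# Hypothetical usage with stubbed detector output:
idx = FMVIndex()
idx.add(Detection("uav_042", 12.5, "vehicle"))
idx.add(Detection("uav_042", 13.0, "vehicle"))
idx.add(Detection("uav_107", 3.2, "person"))
print(idx.query("vehicle"))  # [('uav_042', 12.5), ('uav_042', 13.0)]
```

Because analysts query labels instead of scrubbing through footage, the same index serves both live alerting (insert-time hooks on `add`) and retrospective search over the archive.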