Tactical behavior of UGVs, which successful autonomous off-road driving requires, can in many cases be achieved by covering most possible driving situations with a set of rules and switching into a "drive-me-away" semi-autonomous mode when no such rule applies. However, the unpredictable and rapidly changing nature of combat situations demands more intelligent tactical behavior, based on predictive situation awareness with ongoing scene understanding and fast autonomous decision making. Image understanding and active vision can be implemented as biologically inspired Network-Symbolic models, which combine the power of Computational Intelligence with graph and diagrammatic representations of knowledge. A Network-Symbolic system converts image information into an "understandable" Network-Symbolic format, similar to relational knowledge models. The traditional linear bottom-up "segmentation-grouping-learning-recognition" approach cannot reliably separate an object from its background or clutter, whereas human vision solves this problem unambiguously. Image/video analysis based on the Network-Symbolic approach combines recursive hierarchical bottom-up and top-down processes. The logic of visual scenes can be captured in Network-Symbolic models and used for reliable disambiguation of visual information, including object detection and identification. Such a system can better interpret images and video for situation awareness, target recognition, navigation, and action, and it integrates seamlessly into the 4D/RCS architecture.
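The recursive combination of bottom-up and top-down processing described above can be illustrated with a toy sketch. This is not the paper's implementation: the labels, the `KNOWLEDGE` relational table, and all function names are hypothetical stand-ins, and the "top-down" step is reduced to checking that an object hypothesis has its required relational context present in the scene. The point is only to show how relational (symbolic) knowledge can prune ambiguous bottom-up hypotheses over repeated passes.

```python
# Illustrative sketch (assumed names, not from the source): bottom-up
# processing proposes labeled hypotheses with evidence scores; top-down
# relational knowledge prunes hypotheses whose required context is absent;
# the loop recurses until the interpretation is stable.
from dataclasses import dataclass, field


@dataclass
class Hypothesis:
    label: str
    support: float                               # bottom-up evidence in [0, 1]
    relations: set = field(default_factory=set)  # labels it must co-occur with


# Hypothetical relational knowledge: a "vehicle" is plausible
# only if a "road" is also detected in the scene.
KNOWLEDGE = {
    "vehicle": {"road"},
    "road": set(),
    "tree": set(),
}


def bottom_up(raw_detections):
    """Turn raw (label, score) detections into relational hypotheses."""
    return [Hypothesis(label, score, KNOWLEDGE.get(label, set()))
            for label, score in raw_detections]


def top_down(hypotheses, threshold=0.5):
    """Keep hypotheses with enough evidence AND satisfied relational context."""
    present = {h.label for h in hypotheses if h.support >= threshold}
    return [h for h in hypotheses
            if h.support >= threshold and h.relations <= present]


def interpret(raw_detections, rounds=3):
    """Recursive bottom-up/top-down refinement until the scene is stable."""
    hyps = bottom_up(raw_detections)
    for _ in range(rounds):
        kept = top_down(hyps)
        if len(kept) == len(hyps):   # nothing pruned: interpretation stable
            break
        hyps = kept
    return sorted(h.label for h in hyps)


print(interpret([("vehicle", 0.9), ("road", 0.8), ("tree", 0.4)]))
# → ['road', 'vehicle']  (weak "tree" pruned; "vehicle" kept via "road" context)
```

Note the disambiguating effect of the relational check: a strongly supported "vehicle" hypothesis is still discarded if no "road" survives pruning, which is the kind of scene-logic constraint a purely bottom-up pipeline cannot express.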