The Army plans to integrate artificial intelligence (AI)/machine learning (ML) and other intelligent decision-making aids into future dismounted Warfighter systems to augment situational awareness and target acquisition capabilities. However, due to the unique constraints of dismounted operations, successful implementation of intelligent decision-making aids in dismounted systems necessitates a human-in-the-loop approach, which includes the ability for the Warfighter to provide feedback to the autonomous system. Human-in-the-loop feedback can augment current machine learning techniques by reducing the size of the datasets needed to train algorithms and by allowing algorithms to adapt flexibly to changing battlespace conditions. As such, research is required to define the bidirectional interactions between human and machine in this context, in order to optimize human-intelligent agent teaming for the dismounted Warfighter. In this paper, we focus on a specific application of dismounted human-AI interaction, weapon-mounted target acquisition (small-arms fire control systems), and discuss issues pertaining to an important component of this optimization: how intelligent information is communicated to the end user. We consider how intelligent information is presented to the Warfighter, and what underlying cognitive and perceptual processes can be leveraged to optimize teamed decision making. Such factors are critical to the successful implementation of human-in-the-loop AI in dismounted applications and, ultimately, to the effectiveness of intelligent decision-making aids.