With the exponential growth of technology, future military operations will comprise not just ground operations but a multi-domain battlespace. Paramount to mission success will be reliance on intelligent, adaptive computational agents and effective human-agent teaming. An agent teammate can assist the Soldier with tasks that may be physically difficult, cognitively fatiguing, or high risk. However, successful teaming is compromised when an agent lacks the attributes that contribute to effective human-human collaboration, such as knowledge of team members’ work preferences or capabilities. One way to provide agents with a sense of team-member preferences or capabilities is to quantitatively characterize such preferences as a function of the job the human intends to perform. To address this, we analyzed a modified survey from the Army Research Institute that is commonly used to identify work-abilities variables in military personnel based on the service member’s Military Occupational Specialty (MOS). Using machine learning techniques, we make statistical comparisons to quantitatively assess the population-averaged responses that Soldiers from various MOS codes provided on an Army Abilities questionnaire. Similarities and differences across groupings of MOS codes can provide a set of observations that might be parametrized into a computational agent’s framework. The goal of this work is to identify MOS-code-related parameters that might be incorporated into a computational agent’s framework in the future development of flexibly adaptive agents for Soldier-agent teams.
The Human-Assisted Machine Information Exploitation (HAMIE) investigation utilizes large-scale online data
collection for developing models of information-based problem solving (IBPS) behavior in a simulated time-critical
operational environment. These types of environments are characteristic of intelligence workflow processes conducted
during human-geo-political unrest situations when the ability to make the best decision at the right time ensures strategic
overmatch. The project takes a systems approach to Human Information Interaction (HII) by harnessing the expertise of
crowds to model the interaction of the information consumer and the information required to solve a problem at different
levels of system restrictiveness and decisional guidance. The design variables derived from Decision Support Systems
(DSS) research represent the experimental conditions in this online single-player against-the-clock game where the
player, acting in the role of an intelligence analyst, is tasked with a Commander’s Critical Information Requirement
(CCIR) in an information overload scenario. The player performs a sequence of three information processing tasks
(annotation, relation identification, and link diagram formation) with the assistance of ‘HAMIE the robot’, which offers
varying levels of information understanding dependent on question complexity. We provide preliminary results from a
pilot study conducted with Amazon Mechanical Turk (AMT) participants on the Volunteer Science scientific research platform.
Modern military intelligence operations involve a deluge of information from a large number of sources. A data ranking
algorithm that enables the most valuable information to be reviewed first may improve timely and effective analysis.
This ranking is termed the value of information (VoI) and its calculation is a current area of research within the US
Army Research Laboratory (ARL). ARL has conducted an experiment to correlate the perceptions of subject matter
experts with the ARL VoI model and, additionally, to construct a cognitive model of the ranking process and the
amalgamation of supporting and conflicting information.