A significant benefit of strenuous efforts to optimally man next-generation Navy combatants is the re-orientation of the manner in which work is accomplished within the command center. The necessary shift from direct control of weapon and sensor systems towards human supervision of semi-automated tasks has profound implications for workstation design. Task-centered design has emerged as a key component in the development of human-system integration strategies to meet new warfighting requirements. This presentation discusses how new design concepts are enabling greater work efficiency and multi-tasking in command center workstations. Human-interface technology advances will impact team organization and work procedures: Intra- and inter-team coordination is critical to the success of future ships and battlegroups. The impact of task-centric design principles and task management features on team workload and future warfighting procedures is reviewed and extended to demonstrate support of mission health, readiness and resource concerns of higher echelon commanders.
As the United States Navy enters into an era of reduced manning, the role of the decision maker and that of automation must change in order to maintain an acceptable level of performance. In the past, the responsibility of information synthesis has typically fallen on the operator. This becomes problematic when there is a lack of systems integration (most often technologies are co-located but not integrated), thus causing the operator to process an undue amount of information when analyzing information across multiple systems. Reducing the number of operators without changing the way decisions are made would result in information overload, delayed/degraded decision-making, and increased errors/accidents. If we are to successfully take sailors off ships, we must consider decision making in a new manner. One way to address the situation is to provide the decision maker/operator with a Knowledge Management System (KMS), which reduces cognitive processing requirements on behalf of the operator. For example, decisions based on doctrine can be automated with little impact on the quality of the decision as long as the operator is informed of what actions have been taken (keeping the operator in the loop). This paper will address the definition of Knowledge, the need for a KMS, functional allocation of Knowledge processing, and how systems can be designed for Knowledge Management concepts.
The SPAWAR Systems Center San Diego is currently developing an advanced Multi-Modal Watchstation (MMWS); design concepts and software from this effort are intended for transition to future United States Navy surface combatants. The MMWS features multiple flat panel displays and several modes of user interaction, including voice input and output, natural language recognition, 3D audio, stylus and gestural inputs. In 1999, an extensive literature review was conducted on basic and applied research concerned with alerting and warning systems. After summarizing that literature, a human computer interaction (HCI) designer's guide was prepared to support the design of an attention allocation subsystem (AAS) for the MMWS. The resultant HCI guidelines are being applied in the design of a fully interactive AAS prototype. An overview of key findings from the literature review, a proposed design methodology with illustrative examples, and an assessment of progress made in implementing the HCI designer's guide are presented.
This paper presents an overview of AutoTutor, an intelligent tutoring system that engages in conversationally smooth dialogue with students. AutoTutor simulates the discourse patterns and dialogue moves of human tutors with modest tutoring expertise. In order to concretize the situation, we begin with two short snapshots of AutoTutor in action. In one snapshot, involving an articulate and knowledgeable student, AutoTutor begins with a how question and then simply pumps for further information. In the other, with an inarticulate and less knowledgeable student, AutoTutor begins with a why question and follows this with numerous hints and prompts until the topic is covered. The remainder of the paper describes the system's architecture, which comprises seven modules: a curriculum script, language extraction, speech act classification, latent semantic analysis, topic selection, dialogue move generation, and an animated agent. AutoTutor responds to the learner in real time and runs on a single Pentium processor.
People in nearly every occupational setting can provide examples of poor system design. The focus of this paper is an analysis of design problems found in complex military command and control systems and the ways in which these types of problems can be avoided in future system design. The source of data for this analysis was a group of case studies of sixteen U.S. military systems written by officer-students at the Naval Postgraduate School, Monterey, CA. Systems analyzed span the four military services and include aircraft systems, communications systems, the M-16 rifle, a missile defense system, a message processing system, weapon systems, and decision support systems. Documented problems with system use were categorized according to the following measures of effectiveness: Performance, Safety, Usability, Reliability, Maintainability, Time and Cost to Train, and Workload. The number of problems encountered per system ranged from one to nine; the mean number of reported problems per system was 4.9. IEEE 1220-1998 includes a revised systems engineering approach with an increased emphasis on engineering the system for the human. Adhering to a user-centered design approach should have a positive impact on system design by significantly reducing the types of system problems described in this paper.
The only way to deal with the increased complexities of the future in command and control, the huge amounts of data available, reduced manpower and cost goals, and training in tactical operations is to follow a human centered design process. It is time we design the hardware/software system to support the people instead of asking the people to compensate for the hardware/software system. This will only be accomplished by institutionalizing an integrated human systems engineering process that fully accounts for every person in the system. Use of this process will be critical to future complex system designs and in particular to integrated command centers. In addition to engineers following the process, engineering environments must facilitate a human systems engineering approach. A human systems engineering process and a prototype engineering environment, the Human Centered Design Environment which is currently under development, are described.
The LOCATE layout analysis tool was used to analyze three preliminary configurations for the Integrated Command Environment (ICE) of a future USN platform. LOCATE develops a cost function reflecting the quality of all human-human and human-machine communications within a workspace. This proof-of-concept study showed little difference between the efficacy of the preliminary designs selected for comparison. This was thought to be due to the limitations of the study, which included the assumption of similar size for each layout and a lack of accurate measurement data for various objects in the designs, due largely to their notional nature. Based on these results, the USN offered an opportunity to conduct a LOCATE analysis using more appropriate assumptions. A standard crew was assumed, and subject matter experts agreed on the communications patterns for the analysis. Eight layouts were evaluated with the concepts of coordination and command factored into the analysis. Clear differences between the layouts emerged. The most promising design was refined further by the USN, and a working mock-up built for human-in-the-loop evaluation. LOCATE was applied to this configuration for comparison with the earlier analyses.
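The abstract above does not give LOCATE's actual cost function, but the general idea it describes, scoring a layout by how well important communication links are served, can be sketched as a weighted sum of importance times inter-station distance. The station names and weights below are hypothetical, not drawn from the study:

```python
import math

def layout_cost(positions, links):
    """Sum of (communication importance) x (distance) over all links.

    positions: dict mapping station name -> (x, y) location in metres
    links: dict mapping (station_a, station_b) -> importance weight
    Lower cost means the heavy communicators sit closer together.
    """
    cost = 0.0
    for (a, b), weight in links.items():
        ax, ay = positions[a]
        bx, by = positions[b]
        cost += weight * math.hypot(ax - bx, ay - by)
    return cost

# Hypothetical three-station layout and communication weights
positions = {"TAO": (0.0, 0.0), "AAWC": (1.0, 0.0), "EWS": (4.0, 3.0)}
links = {("TAO", "AAWC"): 5.0, ("TAO", "EWS"): 1.0}
print(layout_cost(positions, links))  # 5*1 + 1*5 = 10.0
```

Candidate layouts can then be ranked by this score, which is how "clear differences between the layouts" could emerge once realistic distances and agreed communication patterns are supplied.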
U.S. Navy ship acquisition programs such as DD 21 and CVNX are increasingly relying on top down requirements analysis (TDRA) to define and assess design approaches for workload and manpower reduction, and for ensuring required levels of human performance, reliability, safety, and quality of life at sea. The human systems integration (HSI) approach to TDRA begins with a function analysis which identifies the functions derived from the requirements in the Operational Requirements Document (ORD). The function analysis serves as the function baseline for the ship, and also supports the definition of RDT&E and Total Ownership Cost requirements. A mission analysis is then conducted to identify mission scenarios, again based on requirements in the ORD, and the Design Reference Mission (DRM). This is followed by a mission/function analysis which establishes the function requirements to successfully perform the ship's missions. Function requirements of major importance for HSI are information, performance, decision, and support requirements associated with each function. An allocation of functions defines the roles of humans and automation in performing the functions associated with a mission. Alternate design concepts, based on function allocation strategies, are then described, and task networks associated with the concepts are developed. Task network simulations are conducted to assess workloads and human performance capabilities associated with alternate concepts. An assessment of the affordability and risk associated with alternate concepts is performed, and manning estimates are developed for feasible design concepts.
The coordinated exploitation of modern communication, micro-sensor and computer technologies makes it possible to give global reach to our senses. Web-cameras for vision, web-microphones for hearing and web-'noses' for smelling, plus the abilities to sense many factors we cannot ordinarily perceive, are either available or will be soon. Applications include (1) determination of weather and environmental conditions on dense grids or over large areas, (2) monitoring of energy usage in buildings, (3) sensing the condition of hardware in electrical power distribution and information systems, (4) improving process control and other manufacturing, (5) development of intelligent terrestrial, marine, aeronautical and space transportation systems, (6) managing the continuum of routine security monitoring, diverse crises and military actions, and (7) medicine, notably the monitoring of the physiology and living conditions of individuals. Some of the emerging capabilities, such as the ability to measure remotely the conditions inside of people in real time, raise interesting social concerns centered on privacy issues. Methods for sensor data fusion and designs for human-computer interfaces are both crucial for the full realization of the potential of pervasive sensing. Computer-generated virtual reality, augmented with real-time sensor data, should be an effective means for presenting information from distributed sensors.
Atmospheric scattering of ultraviolet light is examined as a mechanism for short-range, non-line-of-sight (NLOS) communication between nodes in energy-constrained distributed sensor networks. The physics of scattering is discussed and modeled, and progress in the development of solid state sources and detectors is briefly summarized. The performance of a representative NLOS UV communication system is analyzed by means of a simulation model and compared to conventional RF systems in terms of covertness and transceiver power. A test bed for evaluating NLOS UV communication hardware and modulation schemes is described.
The tactical and urban warfare community has a significant need for up-close sensing capability along with increased sensing coverage. The recent availability of small, inexpensive, light weight, and low power sensing nodes is providing the means to cover an area with a distributed network of remote intelligent sensors. Utilizing local node information combined with information from neighboring nodes is a significant challenge. An even bigger challenge is making use of meta-knowledge and tasking a cluster of nodes to cooperate on a task specified at a very high level. This paper provides a survey of collaborative signal processing considerations and techniques that can be used with a distributed network of smart sensor nodes to improve target detection, identification, localization, and tracking. Factors for system design are discussed, along with an example application.
Networking protocols for distributed collaborative ad-hoc wireless sensing are constrained by requirements such as energy efficiency, scalability, and support for greater variations in topology than traditional fully wired or last-hop wireless (remote to base station) networks. In such a highly constrained and dynamic environment, conventional networking approaches are generally not adequate. A declarative approach to network configuration and organization appears to offer significant benefits. Declarative networking exploits application-supplied data descriptions to control network routing and resource allocation in such a way as to enhance energy efficiency and scalability. An implementation of this approach, called the Declarative Routing Protocol (DRP), has been developed as part of DARPA's Sensor Information Technology program. This paper introduces the concept of declarative networking and what distinguishes it from more conventional networking approaches, describes the Declarative Routing Protocol, and presents performance results from initial experiments.
Networks of distributed sensors offer the promise of persistent and inexpensive surveillance of critical areas, while posing problems in the design of algorithms for processing sensor data into useful information. Specific issues include combining raw data from multiple nodes while minimizing the use of RF links. Collaborative processing techniques that restrict communications to between nearby nodes are discussed. Techniques have been developed to build a consensus among sensor nodes about what is happening in the physical world through controlling collaboration between neighboring nodes and using neighborhood information to develop track information. These techniques are applied while minimizing power consumption through dynamic distribution of processing, approximate processing approaches, and efficient digital encoding of measurements.
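The specific consensus algorithm is not given in the abstract above, but the core idea, building agreement among nodes while restricting communication to immediate neighbors, can be illustrated with a minimal neighborhood-averaging sketch (node names, topology, and the relaxation factor `alpha` are all assumptions for illustration):

```python
def consensus_step(estimates, neighbors, alpha=0.5):
    """One round of neighborhood averaging.

    estimates: dict node -> current scalar estimate (e.g., target bearing)
    neighbors: dict node -> list of nodes within radio range
    Each node moves toward the mean of its neighbors' estimates, so
    agreement spreads across the network without any long-range RF links.
    """
    new = {}
    for node, est in estimates.items():
        nbrs = neighbors[node]
        if not nbrs:
            new[node] = est
            continue
        nbr_mean = sum(estimates[n] for n in nbrs) / len(nbrs)
        new[node] = est + alpha * (nbr_mean - est)
    return new

# Three nodes in a line: A-B-C, each starting with a different estimate
est = {"A": 10.0, "B": 20.0, "C": 30.0}
nbrs = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
for _ in range(50):
    est = consensus_step(est, nbrs)
# all three nodes converge toward a common value (here 20.0)
```

Because each round touches only one-hop links, radio energy per node stays bounded regardless of network size, which is the power-minimization motivation the paper describes.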
To extend the operational envelope of helicopters, the benefits obtained from the use of different sensors (and combinations of sensors) on a single platform are being extensively assessed. Critical to the success of such approaches are image interpretability and resultant pilot workload. The particular case addressed by this paper is that where two sensor inputs are available: one from the visible (image intensified) band and one from the IR band. An adaptive image fusion processing scheme is proposed in which scene metrics are used to provide feedback control data. The use of scene metrics that are derivatives of the image fusion process is proposed in order to maximize system performance.
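The paper's actual scheme and scene metrics are not specified in the abstract; as a minimal sketch of the general idea, the two bands can be blended with weights driven by a simple metric (global standard deviation is used here as a stand-in for scene contrast), with the metric of the fused output available as feedback for re-tuning on later frames:

```python
import numpy as np

def fuse(visible, infrared, eps=1e-6):
    """Blend two co-registered sensor images, weighting each band by a
    simple scene metric (standard deviation as a contrast stand-in).
    Returns the fused image plus the metrics, which a feedback loop
    could use to adjust the weighting on subsequent frames.
    """
    w_vis = visible.std() + eps
    w_ir = infrared.std() + eps
    fused = (w_vis * visible + w_ir * infrared) / (w_vis + w_ir)
    return fused, {"w_vis": w_vis, "w_ir": w_ir, "metric": fused.std()}

rng = np.random.default_rng(0)
vis = rng.uniform(0.0, 1.0, (64, 64))  # high-contrast visible frame
ir = np.full((64, 64), 0.5)            # flat, low-information IR frame
fused, info = fuse(vis, ir)
# the flat IR frame receives almost no weight, so fused ~ vis
```

The choice of metric is the interesting design question: a derivative of the fusion process itself (as the paper proposes) closes the loop, so the system adapts as scene conditions shift between the bands.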
For years, systems designers following a traditional design process have made use of models of hardware and software. A human-centric design process imposes additional requirements and analyses on the designer, and we believe that additional types of models -- models of human performance -- are necessary to support this approach to design. Fortunately, there have been recent technological advances in our ability to model all aspects of human performance. This paper will describe three specific applications of human performance modeling that we are exploring to support the design of human- centric systems, such as future Navy ships. Specifically, this technology can be used to generate team design concepts, to provide human-centric decision support for systems engineers, and to allow simulation-based evaluation of human performance. We believe that human performance modeling technology has matured to the point where it can play a significant role in the human-centric design process, reducing both cost and risk.
The design of new Navy ships that can be effectively manned with dramatically fewer sailors can be supported by a digital design environment. In this paper we will describe an effort currently underway to develop a software agent, the Executive Advisor, that can reason about human factors information accessible within the design environment. It will be able to alert and advise the engineering design team regarding human factors issues and analyses appropriate to the current stage of the system design process. The agent is being built with a cognitive modeling tool called iGEN™, in order to (1) give advice about human factors issues and analyses appropriate to the current stage of design and design team activity, (2) reason about when to deliver and how to filter advice, (3) explain its advice, and (4) monitor changes in the design environment and reason about their impact. The structure of the agent and the issues in creating it will be described.
Display design for command centers often focuses on the best methods to display information. An even more fundamental issue, however, is the design of the 'information space' for the command center, defined as the set of information that is needed by each individual in order to perform their individual and team tasks. Instead of being systematically designed, the information infrastructure and physical layout of command centers often evolve over time as new technologies are added, new positions created, and new connections and communication links are established within and between command nodes. This paper presents a systematic, quantitative method for designing the information space for a command center to best support team performance, based on the communication and information structure of the team. Using information about the team's communication patterns and information needs, we apply model- based principles to generate candidate designs for the physical layout of the command center and to develop designs best suited to the team structure.
There are a variety of problems occurring over the life cycle of an Integrated Command Environment (ICE) that can be addressed with a common approach, i.e. cognitive modeling. Cognitive models are a special type of intelligent agent constructed not only to behave in an intelligent fashion, but also to simulate human behavior. When simulating human behavior, a wide range of simulation realism is possible. A good balance between realism vs. minimum modeling effort and the most efficient CPU time usage should be sought when developing models for a particular purpose and domain. Thus, the level of perceptual and motor detail represented in a cognitive model should be scaled based on the ICE activity being supported. This paper discusses the level of realism needed at different stages in the life cycle of an ICE, and presents improvements to the existing COGNET cognitive modeling framework that support ICE modeling.
An integrated engineering environment is described wherein multidisciplinary capabilities are provided for addressing the role of humans in complex systems. The integration of these capabilities is based on the implementation of a common system design repository in a client-server architecture. The approach is based on the definition of an integrated process to support the engineering of large, complex systems which include humans as operators and decision makers. Such a process is by necessity multidisciplinary in that diverse specialties are required in order to achieve a well-balanced design. The implementation and execution of the process requires a collaborative approach across the various disciplines in order to define desired human roles in the system and to trade off between human and non-human implementation of system functionality. To enable the collaborative process and to manage the complexity of the evolving design, an integrated design environment is defined. The capabilities of the integrated environment support both system level and human engineering level design activities. A system data base schema is also defined to enable the implementation of a common design repository for integrated product and process management and to assure interoperability of the associated tools within the environment.
Command environments have rarely been able to easily accommodate rapid changes in technology and mission. Yet, command personnel, by their selection criteria, experience, and very nature, tend to be extremely adaptive and flexible, and able to learn new missions and address new challenges fairly easily. Instead, the hardware and software components of the systems do not provide the needed flexibility and scalability for command personnel. How do we solve this problem? In order to even dream of keeping pace with a rapidly changing world, we must begin to think differently about the command environment and its systems. What is the correct definition of the integrated command environment system? What types of tasks must be performed in this environment, and how might they change in the next five to twenty-five years? How should the command environment be developed, maintained, and evolved to provide needed flexibility and scalability? This paper presents the issues and concepts to be considered as new Integrated Command/Control Environments (ICEs) are designed following a human-centered process. A futuristic model, the Dream Integrated Command Environment (DICE), will be described which demonstrates specific ICE innovations. The major paradigm shift required to be able to think differently about this problem is to center the DICE around the command personnel from its inception. Conference participants may not agree with every concept or idea presented, but will hopefully come away with a clear understanding that to radically improve future systems, designers must focus on the end users.
As we consider the issues associated with the development of an Integrated Command Environment (ICE), we must obviously consider the rich history in the development of control rooms, operations centers, information centers, dispatch offices, and other command and control environments. This paper considers the historical perspective of control environments from the industrial revolution through the information revolution, and examines the historical influences and the implications they have for us today. Environments to be considered are military command and control spaces, emergency response centers, medical response centers, nuclear reactor control rooms, and operations centers. Historical 'lessons learned' from the development and evolution of these environments will be examined to determine valuable models to use, and those to be avoided. What are the pitfalls? What are the assumptions that drive the environment design? Three case histories will be presented, examining (1) the control room of the Three Mile Island power plant, (2) the redesign of the US Naval Space Command operations center, and (3) a testbed for an ICE aboard a naval surface combatant.
This project has addressed the questions of how to effect change, how to monitor the impact of change, how to tweak a system over time, and how to re-engineer a system so that it best supports the overall mission and goals. Via a review of relevant literature and a series of interviews with people who have played an instrumental role in previous re-engineering efforts, we have gathered lessons learned from 20 different re-engineering efforts. The primary finding was that we were able to find examples of successful staffing reduction, but only when the goal was to improve performance by cutting unnecessary team members. We did not find examples where technology insertion was effective in reducing staff size.
Following a human-centric design process is likely to introduce significant changes in both the technology and the organizational structure of an integrated command system. Rigorously and systematically evaluating the impact of these changes on team performance is a complex, multidimensional task. The first step is to define the anticipated effects of the advanced technologies and reorganization on human performance. Then, both measurement instruments and a testing scenario must be designed. The testing scenario must elicit the behaviors of interest within the bounds of a tactically realistic environment, and the measurement instruments must capture those behaviors. Finally, an appropriate point of comparison must be established, typically by measuring team performance in the current operating environment. In this paper, we will elaborate on this general approach and framework for evaluating the impact of a human-centric design process within the context of ongoing work evaluating the human-centric redesign of a Navy Surface Combatant Air Defense Warfare system.
The Integrated Command Environment (ICE) project is an initiative to consider the possibilities for an innovative naval command center where control of a ship's systems (e.g., weapons, navigation, damage control) can be centrally maintained and communication between the operators controlling these systems can be optimized. There is currently an ICE Lab at the Naval Surface Warfare Center Dahlgren Division with eight watchstations surrounded by ten large screen displays (LSDs). The many opportunities afforded by the LSDs require that the design for their control and visibility issues be carefully considered to maximize benefits to the operators and prevent increases in operator workload or confusion. Important issues in the control of LSDs include who should have the authority to change the information being displayed and how those changes should be implemented, manually or automatically. Visibility assessment takes into account the physical capabilities and limitations of the operator and includes issues such as head rotation, viewing angle, and character height. This paper will discuss the processes used and results obtained when analyzing the control and visibility of the current ICE LSD configuration, along with recommendations for designers of similar control systems.
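The abstract above does not state the legibility criteria used in the ICE analysis, but character-height assessment of this kind is conventionally framed in terms of visual angle: text should subtend at least some minimum arc (a commonly cited floor is around 16 arcminutes) at the operator's viewing distance. A sketch of that calculation, with the 16-arcmin threshold taken as an assumption rather than the study's actual criterion:

```python
import math

def min_char_height(viewing_distance_m, arcmin=16.0):
    """Minimum character height (in mm) for text to subtend at least
    `arcmin` minutes of visual arc at the given viewing distance.
    The 16-arcmin default is a commonly cited legibility floor, used
    here as an assumption; the actual ICE criteria may differ.
    """
    theta = math.radians(arcmin / 60.0)          # visual angle in radians
    return 2.0 * viewing_distance_m * math.tan(theta / 2.0) * 1000.0

# An operator seated 4 m from one of the large screen displays:
print(round(min_char_height(4.0), 1))  # ~18.6 mm
```

Running the same calculation across each watchstation-to-LSD distance, together with head rotation and viewing angle limits, yields the kind of visibility assessment the paper describes.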
This paper presents an Advanced Surface Movement Guidance and Control System (A-SMGCS). NOVA Radar Processing and Display Systems provide comprehensive traffic display for approach, tower and surface movement controllers. The NOVA enhanced Surface Movement Radar System forms the heart of an advanced integrated surface movement guidance and control system (A-SMGCS). The NOVA 9000 Series has been developed in cooperation with ATC authorities, organizations and individual controllers to suit airport operational and safety requirements. It employs an evolutionary design process to permit system expansion and upgrades at low risk. NOVA 9000 is based on well-proven commercial off-the-shelf (COTS) technology. An open system architecture is employed to ease implementation and communication with other systems, and to permit the system to be tailor-made to the operational requirements of different users. This open-architecture approach provides a modular and scalable system designed for easy expansion or modification for future enhancements as the airport's infrastructure develops. NOVA 9000's basic building blocks include: (1) Radar Analyzer and Compressor (RANC), (2) Surveillance Data Server (SDS), (3) Runway Incursion Monitoring and Conflict Alert System (RIMCAS), (4) Controller Working Position (CWP), (5) Recording and Playback System (RPS), (6) Technical Control And Monitoring System (TECAMS).
Naval vessels in the 21st century will be staffed by significantly fewer crew members, yet technological complexity will increase. As reported in independent assessments by the Naval Safety Center, U.S. Coast Guard, and the International Maritime Organization, human error is the direct cause of up to 80% of ship accidents. A major concern for the reduced manning of ships is that the remaining ship personnel maintain required levels of operational readiness and effectiveness across all mission conditions. This includes the assurance that the fewer personnel manning Navy ships will be fully capable of performing required tasks in varied conditions, and will commit fewer errors and cause fewer accidents as compared to present-day ships. The key to successful operation rests with the ship and tactical commanders, who will be charged with quickly and accurately determining the readiness and effectiveness of the ship's crew at any time. A concept for a modeling and simulation tool has been developed and designated SCORE (Simulation for Crew Operational Readiness and Effectiveness). The thrust of SCORE is to aid ship commanders in determining the optimal utilization of available and qualified crew, and to reduce workloads and enhance human performance in the context of ship missions. The focus of this presentation will be to describe SCORE and the human factors engineering issues associated with the design of decision aiding tools for command center personnel. In particular, issues such as modes of decision support, workload simulation, and situation awareness will be reviewed and topics for future investigation will be offered.