This PDF file contains the front matter associated with SPIE Proceedings Volume 11426, including the Title Page, Copyright information, and Table of Contents.
This paper studies the problem of real-time routing in a multi-autonomous-robot enhanced network at the uncertain and vulnerable tactical edge. In a harsh practical environment such as a battlefield, the uncertainty of social mobility and the complexity of a vulnerable environment subject to unpredictable physical and cyber attacks from the enemy would seriously affect the effectiveness and practicality of emerging network protocols. This paper presents GT-SaRE-MANET (Game-Theoretic Situation-aware Robot Enhanced Mobile Ad-hoc Network), a routing protocol that adopts online reinforcement learning to supervise the mobility of multiple robots and to handle uncertainty and potential physical and cyber attacks at the tactical edge. The proposed design can better support virtual, augmented, and mixed reality technology on the future battlefield.
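As a rough illustration of how online reinforcement learning can drive routing decisions of this kind, the sketch below shows an epsilon-greedy Q-learning rule for next-hop selection in a mobile ad-hoc network. All names, parameters, and the reward structure are illustrative assumptions, not the authors' GT-SaRE-MANET implementation.

```python
# Minimal sketch of Q-learning-based next-hop selection in a MANET,
# assuming a per-node table of (destination, next_hop) value estimates.
import random
from collections import defaultdict

class QRouter:
    def __init__(self, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # (destination, next_hop) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose_next_hop(self, destination, neighbors):
        # Epsilon-greedy exploration over currently reachable neighbors.
        if random.random() < self.epsilon:
            return random.choice(neighbors)
        return max(neighbors, key=lambda n: self.q[(destination, n)])

    def update(self, destination, next_hop, reward, neighbor_best):
        # The reward could encode delivery success, delay, or a detected attack;
        # neighbor_best is the chosen neighbor's own best estimate toward the destination.
        key = (destination, next_hop)
        target = reward + self.gamma * neighbor_best
        self.q[key] += self.alpha * (target - self.q[key])
```

In a situation-aware variant, the reward signal would be shaped by observed link quality and indicators of physical or cyber compromise, so that the learned policy routes around degraded or untrusted nodes.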
A fast and reliable decision is based on the availability of a precise data basis and expert knowledge. To address these aspects for resource planning, we provide an intuitive situation analysis in mixed reality with a seamless transition between conventional location planning and virtual inspection of the operational environment, which, in addition to a better understanding of the terrain, allows location-independent cooperation of several users. The basic idea of the presented concept combines intuitive operation preparation in mixed reality in a realistic 3D environment with an efficient 2D situation overview using, e.g., a large display, tablet, or smartphone. The aim is to enable transparent collaboration between different system environments regardless of the location of the respective users, resulting in an improved understanding of the terrain and increased situational awareness for fast, demand-oriented location planning. The solution is based on three building blocks. First, a user-oriented display of high-resolution 3D geodata, based on data standards, for a better understanding of the terrain, including the performance optimizations necessary for use in mixed reality environments. Second, the combination of two-dimensional and three-dimensional displays of operational pictures, which allows a choice of means through the synchronization of information between the different platforms and thus demand-oriented deployment planning. And third, support for content-related cooperation of remote users for fast decision making via wired or mobile networks.
Health monitoring of infrastructure systems is an important issue for public safety. Over the past decade, more structures are exhibiting signs of distress, and owners are required to perform periodic assessments of their assets. Visual inspection is the first approach employed in the assessment of a structure, and monitoring the evolution of distress over time can serve as an estimator of a structure's performance. However, the data obtained during inspections are complex and hard to visualize on-site: only after careful review of the acquired data can the inspector assess the condition of a structural component, and processing the field data may take several weeks before it yields any meaningful insight about the health of the structure. The procedure presented in this paper attempts to bridge this gap between the advancements of computer vision and on-site structural health monitoring through the use of Augmented Reality (AR) tools. More specifically, it includes the projection of holograms that present the reinforcement information obtained during structural inspections, data about the structural condition of the component, and 3D models including as-built details. The inspector can interact with the holograms using hand gestures. The holographic reinforcement visualization eliminates the time required to make a first assessment of a structure. Moreover, it increases efficiency and makes the inspection procedure safer, since the inspector does not have to carry any special equipment other than the holographic headset. Conventional approaches to the visual assessment of infrastructure systems are subjective, time consuming, and expensive to perform. Such AR systems can potentially decrease the time and cost of infrastructure inspections by reducing the time required for post-processing and allowing the inspector to make educated estimations about the health of the structure in the field.
Team communication is crucial in multi-domain operations (MDOs), which require teammates to collaborate on complex tasks synchronously in dynamic, unknown environments. To enable effective communication in human-robot teams, the human teammate must have an intuitive interface that supports and satisfies the time-sensitive nature of the task when communicating information to and from their robot teammate. Augmented Reality (AR) technologies can provide just such an interface by offering a medium for both active and passive robot communication. In this paper we propose a new framework that uses a Virtual Reality (VR) simulation environment for authoring and developing AR visualization strategies, and we present an AR solution for maximizing task performance in synchronized, time-dominant human-robot teaming. The framework utilizes a Unity-based VR simulator, run from the first-person point of view of the human teammate, that overlays AR features to virtually imitate the use of an AR headset in human-robot teaming scenarios. We then introduce novel AR visualizations that support strategic communication within teams by collecting information from each teammate and presenting it to the other in order to influence their decision making. Our proposed design framework and AR solution have the potential to impact any domain in which humans conduct synchronized multi-domain operations alongside autonomous robots in austere environments, including search and rescue, environmental monitoring, and homeland defense.
Successful maintenance and development of underground infrastructure depends on the ability to access underground utilities efficiently. In general, obtaining accurate positions and conditions of subterranean utilities is not trivial due to inaccurate data records and the occlusions that are common in densely populated urban areas, and this limited access to underground resources poses challenges to underground utilities management. Ground-penetrating radar (GPR) is an effective sensing tool widely used for underground sensing, and combining high-accuracy GPR data with augmented reality (AR) pose information enables accurate real-time visualization of buried objects. Although GPR and AR collect and visualize high-accuracy data, intensive computation is required. This work presents a novel GPR-AR system that decreases post-processing time significantly while maintaining a neutral format across GPR-AR data collection methods regardless of varying Internet or GPS connection strengths. The methods explored in this work to mitigate failures of previous systems include automated and georeferenced post-processing, the classification of underground assets using artificial intelligence, and real-time visualization of data collection paths. This work also lays a foundation for a potential 5G GPR-AR system in which the temporal gap between data collection and visualization can be alleviated.
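To make the georeferencing step concrete, the sketch below converts GPS fixes along a GPR scan line into local east/north offsets (in meters) that an AR client could use to anchor visualizations of the collection path. The flat-earth approximation, function names, and example coordinates are assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch: georeference GPR trace positions into a local east/north frame.
import math

EARTH_RADIUS_M = 6_378_137.0  # WGS84 equatorial radius

def gps_to_local_en(lat_deg, lon_deg, origin_lat_deg, origin_lon_deg):
    """Approximate east/north offsets (meters) of a GPS fix from a chosen origin."""
    lat0 = math.radians(origin_lat_deg)
    east = math.radians(lon_deg - origin_lon_deg) * EARTH_RADIUS_M * math.cos(lat0)
    north = math.radians(lat_deg - origin_lat_deg) * EARTH_RADIUS_M
    return east, north

# Illustrative scan line: anchor each trace at its offset from the first fix.
fixes = [(40.44200, -79.94300), (40.44201, -79.94298), (40.44203, -79.94295)]
origin = fixes[0]
anchors = [gps_to_local_en(lat, lon, *origin) for lat, lon in fixes]
print(anchors)
```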
As dependence on IoT devices that support teams of humans and agents increases, it becomes increasingly difficult to provide information to teams in a multi-domain battle. Numerous challenges are present, and several sources of data, including the Internet of Things (IoT), are needed and utilized in these operations. Some of the critical challenges include the dependence and inter-dependence of IoT devices and the uncertainty of information (UoI) obtained from these devices. Uncertainty of information significantly affects the decision-making process, and humans rely on the underlying reasons for uncertainty when making decisions that depend on devices and on the data from those devices. In this paper, the novel LRM (Lott, Raglin, and Metu) method is utilized in the construction of various scenarios for decision making involving IoT devices. The LRM method incorporates several sources of uncertainty and their relationship to taxonomies deemed important to humans in support of militarily relevant conditions. Each scenario provides supporting evidence that uncertainty of information significantly affects the decision-making process when tasks are performed.
The Army anticipates that future battles will take place in more complex and dynamic environments, requiring the Army to push modernization priorities. For Soldiers to thrive within these challenging operational contexts, they must rapidly adapt to leverage and integrate technology in order to gain and maintain overmatch against near-peer adversaries. Teaming will be especially critical for mission success: Soldier teams will need to be adaptive and fluid in their roles to respond to dynamic mission demands. Technology can be leveraged to enable and enhance teaming within human and human-agent teams. Augmented reality (AR) technology may provide an adaptive solution for information sharing across individuals and teams to promote a common operational picture within future operational environments. Here, we present a small-teams study in which dyads leveraged technological tools that helped facilitate teaming during a simulated mission planning and rehearsal scenario. Partners worked together to plan a path to extract a high-value target while avoiding obstacles and hostile forces. Dyads completed missions using two technologies, counterbalanced across the study. The first condition was reflective of current methods for mission planning in the Army: dyads used a Table Top to plan, rehearse, and execute the simulated mission. In the second condition, dyads used the Microsoft HoloLens to complete the mission in an augmented reality environment. This paper presents findings on how perceived teaming efficacy and performance relate to mission performance and workload with the two technologies.
The Swedish National Forensic Centre (NFC) has been developing methods for 3D modeling of crime scenes and, since 2016, methods for Virtual Reality (VR) visualization. Documentation in 3D opens up new possibilities for visualization, documentation, and forensic analysis, and VR as well as Augmented Reality (AR) may in the near future become common practice in several forensic training situations. NFC has developed a proof-of-concept system for VR crime scene reconstruction that has been tested by over two hundred individuals, both from law enforcement and from other fields; the most common comment is that the reconstruction is incredibly realistic and that it is easy to understand how this can be of value within a crime scene investigation. The main limitations have been found to lie within the 3D modeling itself: creating close-to-perfect, realistic models takes a lot of time and effort, time that usually is not reasonable to add to a criminal investigation. For training purposes, however, the payback of increased efficiency might be high enough to motivate the cost, for example by making it possible to train in situations that are usually hard to recreate due to risk of injury or public safety concerns, or to quickly switch between completely different environments without having to travel or make preparations. There is also a fundamental difference between learning from experience and learning from theory, and this is a main motivator behind trying to create as immersive and realistic a training experience as possible.
Participants used a helmet-mounted display (HMD) and a touchscreen monitor to detect targets across three different approaches to displaying 360° video for indirect-vision display systems. A within-subject design was used, with dismounted, mounted, and aerial targets. The number of targets detected, workload, and usability of each condition were measured. The HMD condition produced significantly more targets detected, both overall and within each target type, compared to the monitor condition. HMD use also produced a significantly lower level of mental workload as measured by the NASA TLX and achieved a significantly higher level of usability compared to the other conditions. Possible reasons for these differences are discussed, along with future studies using HMDs and mixed reality technology for indirect-vision display systems.
Previous research demonstrates that AR can provide significant gains in time and accuracy for placement and object-assembly tasks (Angelopoulos, 2018). The purpose of this research is to extend these findings to the military repair and assembly domain by quantifiably evaluating whether the use of AR would be beneficial for maintainers operating the Reconfigurable Transportable Consolidated Automated Support System (RTCASS). We also examine whether any speed or accuracy benefits of AR-guided operations depend on the level of maintainer experience.
Virtual Reality (VR) systems can improve the training of first responders and soldiers in multi-domain operations in a number of ways. Realistic simulation of physical objects, however, is challenging, and the huge variety of equipment and other objects that specialists in first responder units, and in particular CBRN troops, interact with further increases the effort. In this paper, we present a novel and flexible Mixed Reality (MR) training system for first responders that enables the integration of physical objects by using Augmented Virtuality (AV) and "binary tracking". A head-mounted display (HMD) immerses the user in VR while augmenting the visualization with 3D imagery of real objects captured by an RGB-D sensor. In addition, an RFID reader at the user's hand detects the presence or absence (binary response) of certain equipment items. Our proposed MR system fuses this information with data from an inertial motion-capture suit into an approximate global object pose and distributes it. Our solution provides a wide range of options for physical object interaction and collaboration in a multi-user MR environment. In addition, we demonstrate the training capabilities of our proposed system with a multi-user training scenario simulating a CBRN crisis. Results from our technical and quantitative user evaluation with 13 experts in CBRN response from the Austrian Armed Forces (National Defense Academy and Competence Center NBC Defense) indicate strong applicability and user acceptance: over 80% of the participants found it easy or very easy to interact with physical objects and liked the multi-user training much or very much.
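The sketch below illustrates one plausible reading of the "binary tracking" fusion step: while the wrist-mounted RFID reader reports a known tag, the corresponding object's approximate global pose is taken from the motion-capture hand pose; otherwise the object keeps its last known pose. Data structures and names are assumptions, not the authors' implementation.

```python
# Minimal sketch: fuse a binary RFID detection with the tracked hand pose.
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple       # (x, y, z) in the shared world frame
    orientation: tuple    # quaternion (x, y, z, w)

def update_object_pose(detected_tag, tag_to_object, hand_pose, last_poses):
    """Assign the hand pose to the object whose tag is currently in range."""
    if detected_tag is not None and detected_tag in tag_to_object:
        obj = tag_to_object[detected_tag]
        # While the tag is detected, assume the object is held at the hand.
        last_poses[obj] = hand_pose
    # Objects whose tags are out of range keep their last known pose.
    return last_poses

# Usage: a detected "geiger_counter" tag snaps that object to the hand pose.
poses = update_object_pose(
    "tag-01",
    {"tag-01": "geiger_counter"},
    Pose((0.4, 1.1, 0.9), (0.0, 0.0, 0.0, 1.0)),
    {},
)
print(poses)
```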
Rapid advances in immersive reality technologies have resulted in a vast quantity of research papers that include or attempt to apply these technologies to human cognitive augmentation. Taking advantage of traditional psychological and physiological metrics for studying human cognition and behavior, researchers are applying various near-real-time analysis techniques to modulate immersive experiences and influence the mental state of the user. Because of the variety of contributing sub-domains, there is little consensus as to any rigid paradigm for the knowledge being synthesized. In this paper, we conduct a systematic literature review to determine the state of the art of dynamic cognitive augmentation in immersive environments. Following a structured query of academic publications, we conduct an in-depth analysis of 104 papers from a sample of 538. We observe that roughly 66% of the papers at this frontier apply methods best suited for exploratory purposes, limiting the overall extent to which conclusions can be drawn about immersive reality technology's capability to augment human cognition. We further identify a pressing gap in the knowledge necessary for the effective application of immersive reality toward dynamic cognitive augmentation in practical industrial scenarios. We hope this work will influence academia, industry, and standards development organizations to extend the use of XR technology networked with biosensor-enabled intelligent cognitive assistants to enhance the effectiveness of hybrid human-machine systems.
Navigating a virtual environment (VE) while maintaining situational awareness, and learning the environment, are important in military training. As head-mounted displays (HMDs) and virtual reality (VR) become more prevalent in military training, it is important to investigate outcomes with different levels of immersive technology and different levels of experience with VR. We conducted a within-subjects experiment (61 participants returned for all sessions) which measured self-reported VR experience and varied the level of immersion (high, Oculus HMD; medium, NVIS HMD; low, monitor) during VE navigation. After a task to find target objects, participants engaged in transfer tasks to determine what they had learned from the VE. These tasks examined whether items from the environment were recognized (yes/no and multiple-choice identification) or generated from memory without cues, which suggests deeper processing (recall of target/non-target items). Level of immersion affected recognition without interacting with previous VR experience, with high immersion performing significantly better than medium and low. Immersion also affected recall of target objects, with high immersion yielding significantly more recall than medium. For incidental (non-target) recall, immersion did not have an impact, but previous VR experience resulted in significantly better recall of non-target objects. As this was found only for incidental recall, it suggests that those with previous VR experience retained more information from the environment in general. The results suggest that immersion may have different impacts depending on the type of information to be learned, and that previous VR experience may improve performance on deeper learning tasks. These outcomes can be applied in the design of VEs and navigation tasks.
Commercial augmented reality (AR) and mixed reality (MR) displays are designed for indoor near-field gaming tasks. Conversely, outdoor warfighter tasks have a different set of needs for optimal performance, e.g., for aided target recognition (AiTR). The information display needs to be visible across a wide luminance range, from mesopic to photopic (0.001 to 100,000 cd/m²), including max-to-min luminance ratios exceeding 10,000-to-1 within a single scene (high dynamic range luminance, HDR). The information display also should not distract from other tasks, a difficult requirement because the saliency of the information display depends on its relationship to the HDR background texture. We suggest that a transparency-adjustable divisive display AR (ddAR) could achieve these luminance and saliency needs, with potentially less complexity and processing power than current additive displays. We report measurements of acuity to predict how such ddAR might affect low-contrast visibility under gaze shifts, which often result in 10- or 100-fold changes in luminance. We developed an HDR display projection system with up to a 100,000-to-1 luminance contrast ratio and assessed how luminance dynamics affect acuity for semi-transparent letters against a uniform background. Immediately following a luminance flash, visual acuity is unaffected at 20% letter contrast and only weakly affected at 10% letter contrast (+0.10 and +0.12 LogMAR for flashes of 25× and 100× luminance). The resilience of low-contrast letter acuity across luminance changes suggests that soft highlighting and ddAR could effectively convey information and improve AiTR under real-world luminance.
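For context on the reported acuity shifts, LogMAR is the base-10 logarithm of the minimum angle of resolution, so a shift of +0.10 LogMAR corresponds to letters needing to be about 1.26 times larger to remain legible. The snippet below works through that arithmetic for the two flash magnitudes quoted above; the helper function itself is only illustrative.

```python
# Worked arithmetic: convert a LogMAR shift into a multiplicative letter-size factor.
def size_factor_from_logmar_shift(delta_logmar):
    """Multiplicative increase in letter size implied by a LogMAR shift."""
    return 10 ** delta_logmar

for flash, shift in [("25x luminance flash", 0.10), ("100x luminance flash", 0.12)]:
    factor = size_factor_from_logmar_shift(shift)
    print(f"{flash}: +{shift:.2f} LogMAR -> letters ~{factor:.2f}x larger")
```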
With the increasing adoption of mixed reality technology, it is crucial to identify and avoid displays that cause noxious effects among users, such as loss of balance or motion sickness. Toward this end, we examined the effects of sinusoidal modulations of viewpoint on standing posture. These modulations varied the position of the user's viewpoint in a virtual environment (VE) over time along either the left-right or the forwards-backwards direction, each with a chosen amplitude and temporal frequency. We measured the resulting change in posture at the frequency of visual stimulation, the so-called steady-state visually evoked posture response (SSVEPR), and used a signal-to-noise ratio (SNR) method to assess SSVEPR strength. These posture responses are well described by sigmoid functions of viewpoint modulation amplitude, allowing one to estimate the lowest amplitude of the visual stimulus that generates a just-detectable posture response. Results suggest that the visuo-postural control system's sensitivity to viewpoint modulation increases with the frequency of the stimulus. Results also suggest that there is a speed threshold for viewpoint movement that must be met or exceeded if a posture response is to be produced. The results are similar for both left-right and forwards-backwards modulations, and for conditions in which users either moved through the VE or were stationary in the VE while the viewpoint was modulated. These results shed light on which features of visual self-motion stimuli drive postural responses.
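A common way to implement this kind of analysis is to take the spectral power of the sway signal at the stimulation frequency relative to nearby frequency bins, and then fit a sigmoid to response strength as a function of modulation amplitude. The sketch below shows that approach under assumed choices (noise-band width, sigmoid form, and made-up example data); it is not the authors' exact pipeline.

```python
# Minimal sketch: SSVEPR strength as an SNR at the stimulation frequency,
# plus a sigmoid fit of response strength versus modulation amplitude.
import numpy as np
from scipy.optimize import curve_fit

def ssvepr_snr(sway, fs, stim_freq, noise_halfwidth_hz=1.0):
    """Power at the stimulation frequency divided by mean power in nearby bins."""
    freqs = np.fft.rfftfreq(len(sway), d=1.0 / fs)
    power = np.abs(np.fft.rfft(sway)) ** 2
    sig = power[np.argmin(np.abs(freqs - stim_freq))]
    noise_band = (np.abs(freqs - stim_freq) > 0) & (np.abs(freqs - stim_freq) <= noise_halfwidth_hz)
    return sig / power[noise_band].mean()

def sigmoid(amplitude, top, midpoint, slope):
    return top / (1.0 + np.exp(-(amplitude - midpoint) / slope))

# Illustrative fit: response strength as a function of viewpoint-modulation amplitude.
amplitudes = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # arbitrary units
responses = np.array([1.1, 1.4, 2.5, 4.8, 5.6])    # illustrative SNR values
params, _ = curve_fit(sigmoid, amplitudes, responses, p0=[6.0, 2.0, 1.0])
print("fitted sigmoid parameters (top, midpoint, slope):", params)
```

The fitted curve then lets one read off the lowest modulation amplitude that produces a just-detectable posture response, as described in the abstract.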
The Army aims to enhance Soldier Situational Awareness (SA) and performance through the development of technologies including sensors to collect novel sources of information, AI-driven algorithms to integrate information sources and recommend courses of action, and display technologies such as augmented reality to communicate this information to Soldiers. However, the operational effectiveness of these technologies relies on Soldiers' ability to quickly and accurately perceive, interpret, and make decisions with the displayed information. How do we determine whether and when one visualization method is more beneficial than another? We propose a research approach that leverages foundational cognitive science to identify cognitively informed trade-off spaces for different visualization methods, given Soldier needs and the dynamics of battlefield and operational information across mission phases. This approach will also enable understanding of how variables such as stress and time pressure moderate these trade-off spaces. The goal is to derive recommendations for visualization design, including novel methods of visualization that align with and support both the abilities and limitations of human cognition. We use the context of spatial knowledge and navigation as an illustrative example, exploring how different visualizations may effectively accommodate or overcome phenomena of human decision-making (e.g., biases and heuristics) given dynamic and uncertain battlefield information. In this review paper, we discuss the types of tactical information used to perform specific tasks, the underlying cognitive processes supporting these tasks, and the implications that aspects of human cognition have for visualization methods, ultimately motivating the research approach above.
Decision-making is defined as a process resulting in the selection of a course of action from a number of alternatives based on variables that represent key considerations for the task. This is a complex process in which the goal is to generate the "best" course of action given the data and knowledge obtained. As the use of intelligent systems increases, so too does the amount of data to be considered by human analysts and commanders. As the military looks toward the integration of intelligent systems such as smart devices and the Internet of Things, these devices and the data they produce are important for decision making in highly dynamic situations. Of critical importance is the uncertainty of information associated with the data produced by such systems: any uncertainty must be captured and communicated to aid the decision-making process. Our work focuses on how this process can be investigated to understand and analyze the impact of uncertainty on decision-making in multi-domain operational environments. We conducted user studies and present our results to discuss the presentation of uncertainty within the decision-making cycle for our tasks.
Joint Session with Conferences 11413 and 11426: AI/ML and XR
One of the most significant challenges for the emerging operational environment addressed by Multi-Domain Operations (MDO) is the exchange of information between personnel in operating environments. Making information available for leveraging at the appropriate echelon is essential for convergence, a key tenet of MDO. Emergent cross-reality (XR) technologies are poised to have a significant impact on the convergence of the information environment. These powerful technologies present an opportunity not only to enhance the situational awareness of individuals at the "local" tactical edge and decision-makers at the "global" mission command (C2), but to intensely and intricately bridge the information exchanged across all echelons. Complementarily, the increasing use of autonomy in MDO, from autonomous robotic agents in the field to decision-making assistance for C2 operations, also holds great promise for human-autonomy teaming to improve performance at all echelon levels. Traditional research examines, at most, a small subset of these problems. Here, we envision a system in which human-robot teams operating at the local edge communicate with human-autonomy teams at the global operations level. Both teams use a mixed reality (MR) system for visualization and interaction with a common operating picture (COP) to enhance situational awareness, sensing, and communication, but with highly different purposes and considerations. By creating a system that bridges across echelons, we are able to examine these considerations to determine their impact on information shared bi-directionally between the global (C2) and local (tactical) levels, in order to understand and improve autonomous agents teamed with humans at both levels. We present a prototype system that includes an autonomous robot operating with a human teammate, sharing sensory data and action plans with, and receiving commands and intelligence information from, a tactical operations team commanding from a remote location. We examine the challenges and considerations in creating such a system and present initial findings.
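To ground the idea of bi-directional information sharing across echelons, the sketch below outlines one possible common-operating-picture message that a local human-robot team and a global C2 team could exchange. The field names, message shape, and JSON transport are assumptions for illustration, not the prototype described above.

```python
# Minimal sketch: a bi-directional COP message between local and global echelons.
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class CopMessage:
    sender: str                                       # e.g. "robot-1" or "c2-operator"
    echelon: str                                      # "local" or "global"
    timestamp: float = field(default_factory=time.time)
    detections: list = field(default_factory=list)    # upward: sensed entities
    action_plan: list = field(default_factory=list)   # upward: planned waypoints
    commands: list = field(default_factory=list)      # downward: tasking from C2
    intel: list = field(default_factory=list)         # downward: intelligence reports

def encode(msg: CopMessage) -> str:
    """Serialize a COP message for transport between echelons."""
    return json.dumps(asdict(msg))

# Upward share from the tactical edge, then a downward tasking from C2.
up = CopMessage("robot-1", "local", detections=[{"type": "vehicle", "pos": [10.2, 4.5]}])
down = CopMessage("c2-operator", "global", commands=[{"task": "hold", "target": "robot-1"}])
print(encode(up))
print(encode(down))
```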
An increasingly congested and contested space environment, coupled with a rapid rate of growth in data from space systems, has challenged mission operations centers with the difficult task of accurately making sense of the space environment. While automated systems process and optimize massive amounts of data, operators also need the ability to intuitively understand how their systems are impacted by a changing landscape. The analysis and comparison of courses of action (COA) for space mission planning are historically performed with tools that do not convey a clear, shared common operating picture (COP) of possible scenarios and their implications, and thus do not effectively communicate the intent and planning guidance to accomplish an assigned mission. Modeling and simulation efforts are accomplished using a particular set of tools, and the results are then transcribed to presentation views with limited fidelity. A better shared understanding of the COP can be achieved by collaborating in the development and analysis of COAs and visualizing the impacts and risks associated with potential COAs. Emerging augmented reality (AR) technologies offer the ability for multiple people to simultaneously visualize and interact with virtual constructs and information in a collaborative virtual environment (CVE). To facilitate effective COA analysis and wargaming, operational decision makers and space situational awareness (SSA) analysts can benefit from a capability that allows them to collaboratively build, visualize, and interact with space mission resource models in AR.