Mark S. Dennison Jr.¹, David M. Krum², John (Jack) N. Sanders-Reed³, Jarvis (Trey) J. Arthur III⁴
¹U.S. Army Research Lab. (United States); ²California State Univ., Los Angeles (United States); ³Image & Video Exploitation Technologies, LLC (United States); ⁴NASA Langley Research Ctr. (United States)
This PDF file contains the front matter associated with SPIE Proceedings Volume 12125, including the Title Page, Copyright information, Table of Contents, and Conference Committee listings.
An immersive display system is presented for remote vehicle operation, addressing the “last mile” problem for autonomous trucks. The display creates a wraparound virtual image with a 152-degree horizontal and 36-degree vertical field of view. The user wears no headgear, the display remains comfortable over long periods of use, and it still allows the user to see the controls for the remote vehicle. The overall system is smaller than 32” × 16” × 16” (600 mm × 400 mm × 400 mm), lightweight, and low in power consumption, and it provides eye-limited resolution across the wide field of view. To operate the remote vehicle accurately, the overall system latency must be minimized, so the system uses a fast OLED-based display. The latency contributions of the display chain are measured, including remote camera to image-processing computer and computer to display output.
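As a rough illustration of how such latency components might be measured, the Python sketch below timestamps each frame at capture and again at presentation, splitting the total into a camera-to-computer and a computer-to-display segment. The hooks (grab_remote_frame, process_frame, present_frame) are hypothetical placeholders, not the interfaces of the system described above, and the camera-side timestamps are assumed to come from a clock synchronized with the processing computer.

    import time

    def measure_latency(grab_remote_frame, process_frame, present_frame, n=100):
        """Estimate mean per-stage latency over n frames (hypothetical capture/display hooks)."""
        cam_to_cpu, cpu_to_display = [], []
        for _ in range(n):
            t_capture, frame = grab_remote_frame()       # frame arrives with its capture timestamp
            t_arrival = time.perf_counter()
            cam_to_cpu.append(t_arrival - t_capture)     # remote camera -> image-processing computer
            image = process_frame(frame)
            present_frame(image)                         # assumed to block until the OLED panel updates
            cpu_to_display.append(time.perf_counter() - t_arrival)
        mean = lambda xs: sum(xs) / len(xs)
        return mean(cam_to_cpu), mean(cpu_to_display)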
In manned-unmanned teaming scenarios, autonomous unmanned robotic platforms with advanced sensing and compute capabilities will be able to perform online change detection: metric comparison of sensor-based spatial information against previously collected data, for the purpose of identifying changes in the environment that could indicate anything from adversarial activity to natural phenomena affecting the mission. The previously collected information may come from a variety of sources, such as satellites, IoT devices, other manned-unmanned teams, or the same robotic platform on a prior mission. While these robotic platforms will be superior to their human operators at detecting changes, human teammates will for the foreseeable future exceed the abilities of autonomy at interpreting those changes, particularly with respect to mission relevance and situational context. For this reason, the ability of a robot to convey such information intelligently, in a way that maximizes human understanding, is essential. In this work, we build upon previous work that presented a mixed reality interface for conveying change detection information from an autonomous robot to a human. We discuss factors affecting human understanding of augmented reality visualizations of detected changes, based upon multiple user studies in which a user interacts with this system. We believe our findings will inform the creation of AR-based communication strategies for manned-unmanned teams performing multi-domain operations.
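As a minimal sketch of the kind of metric comparison involved (not the authors' algorithm), the snippet below voxelizes two point clouds collected at different times and flags voxels whose occupancy has changed, producing candidate change locations that an AR layer could highlight. The voxel size is an arbitrary illustration value.

    import numpy as np

    def occupancy(points, voxel=0.25):
        """Map an (N, 3) point cloud to the set of occupied voxel indices."""
        points = np.asarray(points, dtype=float)
        return set(map(tuple, np.floor(points / voxel).astype(int)))

    def detect_changes(prior_points, current_points, voxel=0.25):
        """Return voxel-center coordinates that are newly occupied or newly empty."""
        before = occupancy(prior_points, voxel)
        after = occupancy(current_points, voxel)
        appeared = np.array(sorted(after - before), dtype=float) * voxel + voxel / 2
        disappeared = np.array(sorted(before - after), dtype=float) * voxel + voxel / 2
        return appeared, disappeared    # candidate changes to visualize in the AR interface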
Advanced fire control technologies that utilize computer vision-guided target recognition will provide dismounted soldiers equipped with augmented reality displays, such as the Integrated Visual Augmentation System, with enhanced situational awareness. Here we describe a virtual reality framework and environment for the design and evaluation of computer vision algorithms and augmented reality interfaces intended to enhance dismounted soldier situational awareness. For training models, synthetic image datasets of targets in virtual environments can be generated in tandem with neural network learning. To evaluate models under simulated operational environments, a dismounted soldier combat scenario was developed. Trained models process input from a “virtual camera” in line with a rifle-mounted telescopic sight, and augmented reality overlays are projected over the sight’s optics, modeling the function of current state-of-the-art holographic displays. To assess the impact of these capabilities on situational awareness, performance metrics and physiological monitoring were integrated into the system. To investigate how sensors beyond visible-wavelength optical imaging may be leveraged, particularly in degraded visual environments, the virtual camera framework was extended with methods for simulating multispectral infrared imaging. This virtual reality framework thus provides a platform for evaluating multispectral computer vision algorithms under simulated operational conditions and for iteratively refining the design of augmented reality displays. Improving these components in virtual reality provides a rapid and cost-effective method for refining specifications and capabilities toward a field-deployable system.
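The evaluation loop can be pictured as in the hedged sketch below, which shows how a trained model sits in line with the rifle-sight imagery. The simulation API (render_virtual_camera, draw_overlay, log_metrics), the detector interface, and the confidence threshold are hypothetical placeholders for illustration, not the framework's actual interfaces.

    def evaluation_step(sim, detector, spectral_band="visible"):
        """One frame of the simulated engagement: render, detect, overlay, log (hypothetical API)."""
        # Render the scene from the rifle-sight viewpoint; band may be "visible" or a simulated IR band.
        frame = sim.render_virtual_camera(band=spectral_band)
        detections = detector(frame)                # e.g. a list of (label, confidence, bounding_box)
        for label, confidence, box in detections:
            if confidence > 0.5:                    # illustrative threshold
                sim.draw_overlay(box, label)        # holographic-style symbology over the sight optics
        sim.log_metrics(detections)                 # feeds performance and physiological analysis
        return detections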
Binocular head-mounted displays (HMDs) utilizing augmented reality (AR) strategies can greatly increase the information that reaches the user’s visual system. For example, binocular presentation allows elements to appear in stereoscopic depth with higher perceived resolution, and AR can improve the quality of a low-visibility scene. But with two independent optical channels, a binocular HMD can easily become misaligned, which can be detrimental to both performance and comfort (Gavrilescu et al., 2019; SPIE DCS). Here, we quantify the effect of global binocular misalignment in an HMD on operational and visual performance during a simulated flying task. Using a platform consisting of three 85-inch displays providing out-the-window imagery and a head-tracked AR overlay (e.g., DAS) within the HMD (Posselt et al., 2021; SPIE DCS), subjects were instructed to adhere to flight commands while periodically discriminating the orientation of a target aircraft. In different blocks, the two optics of the HMD were either well aligned, misaligned vertically by 0.67°, or rolled in opposite directions by 4°. In the well-aligned condition, subjects could discriminate the orientation of the target aircraft on average nearly 1000 ft farther than in either misaligned condition. Curiously, adherence to the flight commands was affected only by the vertical misalignment, which may reflect a strategy of selectively ignoring grossly misaligned imagery in one eye. These results underscore the need to quantify and maintain well-aligned visual channels in binocular HMDs that utilize AR imagery.
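For readers implementing similar conditions, such misalignments can be introduced by perturbing each eye's view orientation before rendering. The sketch below, using scipy's rotation utilities, builds the two perturbations reported above; it illustrates the geometry only and is not the study's stimulus code. In particular, interpreting the 4° opposite roll as ±4° per eye is an assumption.

    from scipy.spatial.transform import Rotation as R

    def misaligned_eye_rotations(vertical_deg=0.0, roll_deg=0.0):
        """Return (left, right) rotations to apply to each eye's view frame.

        vertical_deg: vertical (pitch) offset applied to the right eye only.
        roll_deg: roll applied in opposite directions to the two eyes (assumed +/-roll_deg per eye).
        """
        left = R.from_euler("z", +roll_deg, degrees=True)
        right = R.from_euler("zx", [-roll_deg, vertical_deg], degrees=True)
        return left, right

    # The two misaligned conditions described above.
    vert_left, vert_right = misaligned_eye_rotations(vertical_deg=0.67)
    roll_left, roll_right = misaligned_eye_rotations(roll_deg=4.0)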
Environmental factors and weather conditions pose numerous challenges to helicopter pilots and crews in offshore missions. Fog or the lack of visual cues over the sea, for example, can lead to disorientation and loss of situational awareness. To increase the safety and operational capability of helicopters in offshore missions to wind farms, a pilot assistance system providing offshore wind farm-specific information in the head-up display, obstacle awareness, route planning, and navigation support has been investigated and evaluated. The head-up display presents different sets of symbology for flight guidance, navigation, and obstacle avoidance to increase situational awareness. Furthermore, a path planning algorithm ensures an obstacle-free flight path to a selected target, which is displayed as a “tunnel in the sky”. Besides the head-up visual system, head-down display information combines infrared camera-based imagery embedded in a synthetic overlay (combined vision) with target information and traffic information from Automatic Dependent Surveillance-Broadcast (ADS-B) and Automatic Identification System (AIS) signals. Additionally, a digital mission/navigation map, complemented by offshore-relevant information for wind farms, is provided on a head-down display. To evaluate the pilot assistance system, two simulation campaigns (2019 and 2021) were conducted in DLR’s Air Vehicle Simulator (AVES). Both piloted studies focused on workload impact and situational awareness, as well as on conclusions for further improving the system according to operational needs. In addition, a system demonstrator was integrated into an Airbus H145 helicopter test platform, supplementing the system evaluation in a flight campaign. This paper presents an overview of the pilot assistance system and the results from the second piloted simulation campaign, completed in September 2021, complemented by pilot feedback from the flight demonstration performed by Airbus.
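The abstract does not specify which planning algorithm is used; as a generic illustration of obstacle-free path planning of the kind that could feed a tunnel-in-the-sky display, the sketch below runs A* over a 2D occupancy grid (True = obstacle). The grid abstraction, unit step costs, and 4-connected neighborhood are arbitrary choices for the example, not the system's design.

    import heapq, itertools

    def astar(grid, start, goal):
        """A* over a boolean occupancy grid; returns a list of cells from start to goal, or None."""
        rows, cols = len(grid), len(grid[0])
        h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])      # Manhattan heuristic
        tie = itertools.count()                                      # breaks heap ties deterministically
        frontier = [(h(start), 0, next(tie), start, None)]
        came_from, best_cost = {}, {start: 0}
        while frontier:
            _, g, _, cell, parent = heapq.heappop(frontier)
            if cell in came_from:
                continue                                             # already expanded via a cheaper path
            came_from[cell] = parent
            if cell == goal:
                path = []
                while cell is not None:                              # walk parents back to the start
                    path.append(cell)
                    cell = came_from[cell]
                return path[::-1]
            r, c = cell
            for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = step
                if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                    ng = g + 1
                    if ng < best_cost.get(step, float("inf")):
                        best_cost[step] = ng
                        heapq.heappush(frontier, (ng + h(step), ng, next(tie), step, cell))
        return None                                                  # no obstacle-free path exists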
Manned and unmanned helicopters are designed to accomplish missions at low altitudes. This flight environment is congested with a variety of hazards, including vertical obstacles with small cross sections, which are difficult to see or sense in time to avoid catastrophic collisions. Twelve participants from industry, the military, and government discussed the difficulty of detection and the merits of various mitigations for avoiding vertical obstacles. Active military and civilian helicopter pilots were paired with a variety of engineers in four 2-hour online focus groups. Descriptive codes were manually assigned to the transcripts, from which mitigation themes and sentiments emerged. Commentary highlighted the current burden on pilots as manual, largely visual, processors of multiple data sources. Degraded visual environments, even with visual enhancement, were cited as a challenge to obstacle detection, as was detection in clear air. Mitigations included experience, planning, and technology. In most cases, database accuracy and completeness were overestimated or unknown, and this compounding error increased confusion about the location of these sources of potential catastrophe. Flight operations in the vertical obstacle environment are unacceptably risky, and safely navigating that environment largely relies on avoiding occupied airspace indicated by this imperfect data. However, the increasing prevalence of both obstacles and aircraft necessitates a comprehensive strategy to reduce catastrophic risk. Aircraft operators in this environment face many attentional demands that merit additional research to realize the prospect of Advanced Air Mobility.
Within the US-German Advanced Technologies for Rotorcraft Project Agreement (PA-ATR), a human-machine interaction concept has been developed with the purpose of increasing mode and situational awareness and simplifying authority transfer for highly automated helicopters. It combines a multimodal cueing system with tactile, auditory, and visual components with an automatic trajectory-following capability. The development involved iterative optimization and evaluation with pilots using a simulator and the ACT/FHS research helicopter. For the simulator evaluation, a low-level flight scenario was developed. The paper describes the system, explains the test environment, and presents the evaluation results.
Atmospheric fog is a common degraded visual environment (DVE) that reduces sensing and imaging range and resolution in complex ways not fully captured by traditional metrics. Better physical models are therefore required to describe imaging systems in a fog environment. We have developed a tabletop fog chamber capable of creating repeatable fog-like conditions for controlled experimentation on optical systems within this common DVE. We present measurements of transmission coefficients and droplet size distributions in a multiple-scattering regime using this chamber.
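For context, the single-scattering baseline against which such measurements are usually compared is the Beer-Lambert law. The sketch below computes a transmission coefficient from an assumed droplet number density, mean radius, and path length, treating droplets as geometric scatterers with an extinction efficiency of about 2; multiple scattering, which the chamber is designed to probe, causes the real transmission to deviate from this estimate. The example numbers are illustrative only.

    import math

    def beer_lambert_transmission(number_density_m3, droplet_radius_m, path_length_m, q_ext=2.0):
        """Single-scattering transmission T = exp(-N * Qext * pi * r^2 * L)."""
        sigma = q_ext * math.pi * droplet_radius_m ** 2      # extinction cross-section per droplet
        return math.exp(-number_density_m3 * sigma * path_length_m)

    # Example: 10 um radius droplets at 1e8 per m^3 over a 0.5 m tabletop path (illustrative values).
    print(beer_lambert_transmission(1e8, 10e-6, 0.5))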
Virtual reality (VR) and augmented reality (AR) near-to-eye systems can offer virtual surround-the-user screen sizes, excellent portability, telepresence, and enhanced situational awareness, to name just a few attributes relevant to both military and civilian applications. Advancements in eight critical technology enablers may soon resolve the largest economic, functional, and human-factors issues that have impeded broad acceptance of near-to-eye imaging. There have recently been substantial improvements in these enabling technologies, which are reaching critical viability or are moving out of early-stage research and development with cost-effective manufacturing potential. Each of these enabling technologies is discussed and its potential impact on the future of AR and VR assessed.
Aircraft head-up displays (HUDs) have historically used monochromatic symbology to relay critical information to the pilot. The main reasons for using a single color are the favorable spectral overlap of the green phosphors of the emission source with the peak of human luminance sensitivity, and the difficulty of achieving performance requirements at two widely separated wavelengths. However, using at least one additional color could greatly enhance situational awareness and effectiveness by color-coding information or creating multi-functional symbology. Here we show the optical system design of a dual-color aircraft HUD enabled by a holographic combiner and a digital light processing chip. We found that the main optical requirements of an aircraft HUD, namely resolution and luminance, can be met for the two wavelengths using a multilayer hologram that selectively diffracts each color to the pilot. By using a hologram as the combiner, the direct view to the pilot can be free of both tint and forward light signature. Furthermore, the holographic combiner allows a steeper input angle, leading to a more compact optical path compared to traditional reflective or refractive counterparts. The results demonstrate a path forward for a dual-color aircraft HUD based on a hologram/digital chip combination.
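As background for why two widely separated wavelengths are hard to serve with a single diffractive element, the classical grating equation shows how the diffracted angle shifts with wavelength for a fixed grating period; a multilayer hologram with one layer tuned per color is one way around this. The sketch below evaluates that relation for two illustrative wavelengths and an illustrative period and incidence angle, none of which are the design values of the HUD described above, and the sign convention is a simplifying assumption.

    import math

    def diffracted_angle_deg(wavelength_nm, period_nm, incidence_deg, order=1):
        """Grating equation m*lambda = d*(sin(theta_i) + sin(theta_m)), solved for theta_m."""
        s = order * wavelength_nm / period_nm - math.sin(math.radians(incidence_deg))
        if abs(s) > 1:
            return None    # no propagating diffracted order for these parameters
        return math.degrees(math.asin(s))

    # Illustrative comparison of a green and a red wavelength on the same 700 nm period grating.
    for wl in (532, 635):
        print(wl, diffracted_angle_deg(wl, period_nm=700, incidence_deg=30))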
AR/MR/XR technology continues to proliferate. However, with few exceptions, advances in the motion tracking needed to place symbols and images accurately on the real-world scene have not kept pace. For AR/MR/XR to move from narrow niche applications to more general use, motion tracking that provides accurate pose will also need to work in diverse environments and conditions. Many additional factors contribute to tracking performance, but they are more easily addressed. This paper uses several real-world dismounted and mounted scenarios to determine the minimum tracking accuracy needed to achieve success, and then proposes a minimum performance specification for motion tracking to support general AR/MR/XR displays.
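A simple way to reason about such requirements is to convert an allowable on-target registration error at a given range into an angular tracking error budget, as in the hedged sketch below. The numbers are illustrative, not the specification proposed in the paper.

    import math

    def angular_error_budget(allowed_offset_m, target_range_m):
        """Angular error (mrad, deg) that keeps a symbol within allowed_offset_m at target_range_m."""
        angular_rad = math.atan2(allowed_offset_m, target_range_m)
        return angular_rad * 1000.0, math.degrees(angular_rad)

    # Example: keep an AR marker within 0.5 m of a point 100 m away (a dismounted-style case).
    mrad, deg = angular_error_budget(0.5, 100.0)
    print(f"angular error budget: {mrad:.1f} mrad ({deg:.2f} deg)")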
Enabling leaders to take decisive action in high operational tempo environments is key to achieving decision superiority. Under stressful battlefield conditions with little to no time for communication, it is critical to acquire relevant tactical information quickly to inform decision-making. A potential augmentation to tactical information systems is access to real-time analytics on a unit's operating status and emergent behaviors, inferred from soldier-worn sensors or sensors embedded in their kit. Automatic human activity recognition (HAR) has become increasingly achievable in recent years thanks to advancements in algorithms and to ubiquitous low-cost yet powerful processors, hardware, and sensors. In this paper, we present weapon-borne sensor measurement acquisition, processing, and HAR approaches to demonstrate Soldier state estimation in a target acquisition and tracking experiment. The classified Soldier states include whether the Soldier is resting, tracking a target, transitioning between potential targets, or firing a shot at the target. We implemented multivariate time series classification (TSC) using the sktime toolkit to perform this task and discuss the performance of various classification methods. We also discuss a framework for efficient transfer of this information to other tactical information systems on the network.
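A minimal sketch of the classification step is shown below, assuming the weapon-borne inertial channels have already been segmented into fixed-length windows with one label per window (rest, track, transition, fire). It uses sktime's RocketClassifier as a stand-in; the paper compares several methods and does not necessarily use this one, and the array shapes and random placeholder data are purely illustrative.

    import numpy as np
    from sktime.classification.kernel_based import RocketClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    # Illustrative shapes: 400 windows, 6 IMU channels (3-axis accel + gyro), 200 samples per window.
    X = np.random.randn(400, 6, 200)                  # placeholder for real weapon-borne sensor windows
    y = np.random.choice(["rest", "track", "transition", "fire"], size=400)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    clf = RocketClassifier(num_kernels=1000)          # one of several TSC methods sktime provides
    clf.fit(X_train, y_train)                         # accepts (n_instances, n_channels, n_timepoints)
    print(classification_report(y_test, clf.predict(X_test)))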
This paper presents research on the use of visual-inertial Simultaneous Localization and Mapping (SLAM) algorithms to aid continuous wave (CW) radar target mapping. SLAM is an established field in which radar has been used internally to contribute to the localization algorithms; here, instead, SLAM outputs are used to localize radar data and construct three-dimensional target maps that can be viewed live in augmented reality. These methods are transferable to other types of radar units and sensors, but this paper shows how they can be applied to calculate depth efficiently with CW radar through triangulation using a Boolean intersection algorithm. Localization of the radar target is achieved through quaternion algebra. Due to the compact nature of the SLAM and CW devices, the radar unit can be operated entirely handheld, and targets are scanned in a free-form manner with no need for a gridded scanning layout. The main advantage of this method is that it eliminates many hours of usage training and expertise, thereby reducing ambiguity in the location, size, and depth of buried or hidden targets. Additionally, the method gives the user the power, penetration, and sensitivity of CW radar while overcoming its inherent lack of range finding. Applications include pipe and buried-structure location, avalanche rescue, structural health monitoring, and historical site research.
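To illustrate the localization step, the sketch below uses quaternion algebra (via scipy) to place a radar return in the world frame: the SLAM pose provides the handheld unit's position and orientation, and the measured range is pushed along the antenna's boresight direction. The boresight axis and the example pose values are assumptions for illustration, and the Boolean intersection step used for triangulating depth is not reproduced here.

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def radar_return_world(position, quaternion_xyzw, range_m, boresight=(0.0, 0.0, 1.0)):
        """World-frame point implied by a CW radar return at range_m along the unit's boresight.

        position: (3,) SLAM position of the handheld unit.
        quaternion_xyzw: SLAM orientation as a unit quaternion (x, y, z, w).
        boresight: antenna pointing axis in the device frame (assumed +z here).
        """
        rot = R.from_quat(quaternion_xyzw)               # quaternion -> rotation (q * v * q^-1)
        direction = rot.apply(np.asarray(boresight))     # device-frame boresight into the world frame
        return np.asarray(position) + range_m * direction

    # Two handheld poses observing the same buried target; intersecting such rays from many poses
    # is what constrains the target's location and depth.
    p1 = radar_return_world([0.0, 0.0, 1.2], [0, 0, 0, 1], 0.80)
    p2 = radar_return_world([0.3, 0.0, 1.2], [0, 0.1305, 0, 0.9914], 0.82)   # ~15 deg yaw, illustrative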
The scientific literature on light field visualization has recently started addressing 3D interaction techniques for light field displays, as well as simulations of realistic physical camera motion. Both are highly relevant to civilian and military use cases. In the case of the latter, 3D interactions are used for various purposes, including the control of surface and air vehicles, strategic and tactical planning, and real-time support of operations. Realistic physical camera motion is particularly important for field surveillance and the situational awareness of dismounted operators, as well as for the civilian use case of cinematography. Yet thus far, such techniques have not been perceptually evaluated by non-expert observers. While theoretical feasibility has already been investigated and expert reviews have initiated the first steps, data on actual perceptual preference is still lacking; here, “expert” refers to scientists, researchers, and manufacturing professionals. Ultimately, however, the efficiency of these use cases is fundamentally determined by the observers’ and operators’ perceptual comfort with 3D visualization and the effectiveness of the interactions. In this paper, we present the results of empirical studies on perceptual preference regarding the potential interaction techniques and the different types of realistic physical camera motion. The test contents were displayed on a large-scale light field cinema system. Multiple subjective quality metrics were used to decompose the visual experience of the observers, and additional attention was paid to essential aspects of long-term usage, such as dizziness.
Over the past decade, virtual reality (VR) devices have emerged not only on the consumer market but also in various civilian and military use cases. One of the most important differences between typical VR entertainment and professional use is that, in the latter, operation may not be interruptible. For example, while the continuity of spatial surveillance and threat detection is vital to the success and safety of tactical military scenarios, the operator may be affected by perceptual fatigue, particularly after extended periods of VR equipment usage. The same applies to ground and air reconnaissance, as well as to piloting and targeting. However, the thresholds of perceptual fatigue are affected by numerous human factors, equipment attributes, and content parameters, many of which are not yet addressed in the scientific literature. In this paper, we present a large-scale study on the thresholds of perceptual fatigue for VR visualization. Five levels of fatigue are differentiated in order to examine in detail the correlations between human perceptual endurance and the investigated test conditions. The experiments distinguish content based on motion vectors and on object size relative to the space of perceivable 3D visualization. The majority of the exhaustive tests are analogous to the different zoom levels of visual capture equipment. Our work therefore highlights optimal device settings that minimize potential perceptual fatigue and thus support longer periods of uninterrupted operation.