Mark S. Dennison Jr.¹, David M. Krum², John (Jack) N. Sanders-Reed³, Jarvis (Trey) J. Arthur III⁴
¹U.S. Army Research Lab. (United States); ²California State Univ., Los Angeles (United States); ³Image & Video Exploitation Technologies, LLC (United States); ⁴NASA Langley Research Ctr. (United States)
Organized by experts from across academia, industry, the Federal Labs, and SPIE, this meeting will highlight emerging capabilities in immersive technologies and degraded visual environments as critical enablers to future multi-domain operations.
Automated Target Detection and Recognition (ATD/R) of targets in a cluttered background presents major challenges for the designers of military sensor systems. To achieve acceptable levels of detection and recognition performance, extensive use is often made of complex image processing techniques, such as neural network architectures that attempt to replicate the capabilities of the human vision system. Although such methods generate good levels of performance, they are often not suitable for applications where smaller, lower-cost sensor systems are used across a more diverse range of scene content and imaging conditions. A new approach is proposed here in which system performance is achieved through a more effective balance between optical-domain and post-detection processing. Specifically, the Signal-to-Clutter Ratio (SCR) is maximised by using broadband spectral and polarisation information to offset performance deficiencies associated with simple image and data processing functions. It is shown that this approach offers a basis for introducing ATD/R functionality in low-cost imaging systems such as those flown on drones. A simple imaging system is used to demonstrate the concept, which comprises two broadband cameras operating in the visible and near-infrared bands, with one of the sensors additionally providing polarimetric information. The concept of a joint spectral-polarisation weight map is proposed, and the potential performance gain is illustrated using targets in moderate- to high-clutter situations. The results indicate potential benefits for future ATD/R systems, and it is hoped that this will encourage future design engineers to consider the wider use of optical-domain information.
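As a rough sketch of how such a joint spectral-polarisation weight map might be formed from two registered broadband frames and a degree-of-linear-polarisation image, consider the Python fragment below; the band-ratio cue, the exponents, and the loader name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def signal_to_clutter_ratio(img, target_mask, clutter_mask):
    """SCR = |mean(target) - mean(clutter)| / std(clutter)."""
    t = img[target_mask].mean()
    c = img[clutter_mask]
    return abs(t - c.mean()) / (c.std() + 1e-9)

def joint_weight_map(vis, nir, dolp, alpha=1.0, beta=1.0, gamma=1.0):
    """Illustrative joint spectral-polarisation weight map.

    vis, nir : registered broadband visible / near-infrared frames (float arrays)
    dolp     : degree of linear polarisation from the polarimetric sensor, in [0, 1]
    The exponents alpha/beta/gamma are assumed tuning parameters, chosen so that
    pixels that stand out in both band ratio and polarisation are emphasised
    while bland clutter is suppressed.
    """
    eps = 1e-6
    band_ratio = np.abs(nir - vis) / (nir + vis + eps)   # spectral contrast cue
    w = (band_ratio ** alpha) * (dolp ** beta)
    w = w ** gamma
    return w / (w.max() + eps)                           # normalise to [0, 1]

# Usage sketch: weight the NIR image before a simple threshold detector.
# vis, nir, dolp = load_registered_frames(...)           # hypothetical loader
# weighted = nir * joint_weight_map(vis, nir, dolp)
# detections = weighted > weighted.mean() + 3 * weighted.std()
```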
A well-known approach to developing and testing unmanned aerial vehicles is the simulation of vehicles and their environment in commercial game engines, and simulating sensors adds valuable capabilities to these simulations. This paper presents a millimeter-wave radar implementation for the AirSim plugin in Unreal Engine. To obtain the radar response, we use the Unreal Engine and AirSim rendering outputs of surface normal components, semantic segmentation of the various objects in the scene, and depth distance from the camera. In particular, we calculate the radar cross section for each object in the scene separately, and are thus able to assign different material characteristics to different entities. To compute the power return, we take into account the atmospheric attenuation of the signal, based on the wavelength of the radar wave, the gain of the radar antenna, and the transmitted power. For greater realism, we add noise at different stages of the simulation. Future work to improve the usability and performance of the simulator is presented.
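The power-return computation described here is consistent with the standard monostatic radar range equation plus a two-way atmospheric loss term; a minimal sketch is given below, with all parameter values purely illustrative and not taken from the paper.

```python
import numpy as np

def received_power(p_t, gain, wavelength, rcs, r, atten_db_per_km):
    """Monostatic radar range equation with two-way atmospheric attenuation.

    p_t             : transmitted power [W]
    gain            : antenna gain (linear, same antenna for Tx and Rx)
    wavelength      : radar wavelength [m]
    rcs             : radar cross section of the object [m^2]
    r               : range to the object [m]
    atten_db_per_km : one-way atmospheric attenuation [dB/km]
    """
    spreading = (p_t * gain**2 * wavelength**2 * rcs) / ((4 * np.pi)**3 * r**4)
    two_way_loss_db = 2 * atten_db_per_km * (r / 1000.0)
    return spreading * 10 ** (-two_way_loss_db / 10.0)

# Example: a 77 GHz radar (wavelength ~3.9 mm), 25 dB antenna gain, 1 m^2 target
# at 50 m, with a simple multiplicative noise term mimicking one noise stage.
p_r = received_power(p_t=1.0, gain=10**(25/10), wavelength=3.9e-3,
                     rcs=1.0, r=50.0, atten_db_per_km=0.4)
p_r_noisy = p_r * (1 + np.random.normal(0, 0.05))   # illustrative noise model
```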
Degraded visual environments like fog pose a major challenge to safety and security because light is scattered by tiny particles. We show that by interpreting the scattered light it is possible to detect, localize, and characterize objects normally hidden in fog. First, a computationally efficient light transport model is presented that accounts for the light reflected and blocked by an opaque object. Then, statistical detection is demonstrated for a specified false alarm rate using the Neyman-Pearson lemma. Finally, object localization and characterization are implemented using the maximum likelihood estimate. These capabilities are being tested at the Sandia National Laboratory Fog Chamber Facility.
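For the Gaussian special case, the Neyman-Pearson detector and the maximum-likelihood localizer reduce to a matched filter with a threshold set by the desired false alarm rate, plus a residual minimisation over candidate object positions. The sketch below assumes white Gaussian noise and a precomputed bank of model predictions; it is a simplification of the paper's full light-transport likelihood, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def np_threshold(signal_template, noise_sigma, p_fa):
    """Neyman-Pearson threshold for a known signal in white Gaussian noise.

    The likelihood ratio test reduces to a matched filter: declare a target
    when y^T s exceeds a threshold chosen so the false alarm rate is p_fa.
    """
    return noise_sigma * np.linalg.norm(signal_template) * norm.isf(p_fa)

def detect(measurement, signal_template, noise_sigma, p_fa=1e-3):
    stat = measurement @ signal_template          # matched-filter statistic
    return stat > np_threshold(signal_template, noise_sigma, p_fa)

def localize_ml(measurement, template_bank):
    """Maximum-likelihood estimate over a grid of candidate object positions.

    template_bank: dict mapping candidate position -> predicted measurement
    from the light transport model. Under Gaussian noise the ML estimate is
    the candidate minimising the squared residual.
    """
    return min(template_bank,
               key=lambda pos: np.sum((measurement - template_bank[pos]) ** 2))
```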
Army Aviation mission scenarios carry inherent risks, and those risks are amplified under degraded visual environment (DVE) flight conditions. As part of the Aviator Risk Assessment Model (AvRAM), DVE obscurants (i.e., rain, fog, dust, smoke, and snow) have been modeled using published sensor penetration data for visible, infrared, and lower-frequency bands. Visibility calculations, as well as time and distance calculations for target detection, were performed for different sensor configurations and sensitivities, environmental conditions, and airspeeds. The AvRAM includes a simplified target identification paradigm that incorporates the probability of correct identification as a function of distance and time.
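As a minimal illustration of the kind of visibility and time-available arithmetic involved, the sketch below assumes a homogeneous obscurant, a Koschmieder-style contrast threshold, and illustrative parameter values; it is not AvRAM's model or its published sensor penetration data.

```python
import numpy as np

def detection_range_km(extinction_per_km, contrast_threshold, inherent_contrast=1.0):
    """Range at which apparent contrast falls to the sensor's threshold
    (Koschmieder-style approximation for a uniform obscurant)."""
    return np.log(inherent_contrast / contrast_threshold) / extinction_per_km

def time_available_s(detection_km, decision_km, airspeed_kts):
    """Seconds between target detection and the point where a decision is needed."""
    airspeed_km_s = airspeed_kts * 1.852 / 3600.0
    return max(detection_km - decision_km, 0.0) / airspeed_km_s

# Example: light fog (extinction 1.0 /km), 5% contrast threshold, 120 kts,
# decision point 0.5 km from the target (all values illustrative).
r_det = detection_range_km(1.0, 0.05)          # ~3.0 km
print(time_available_s(r_det, 0.5, 120.0))     # ~40 s of reaction time
```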
While AR has been deployed successfully in the military air domain for decades, its use in the ground domain poses serious challenges. Some of these challenges result from technological limitations. Others, however, are more difficult or even impossible to resolve because they reflect fundamental human characteristics. The toughest-to-solve limitations are caused by our physiology, anatomy, and cognition. Eye-physiology limitations include masking, contrast, and occlusion. The anatomical shape of the human head forces optics to be mounted in front of the inherently glare-protecting eye sockets. The brain's problems with respect to AR are that i) it is not 'built' to perceive transparency, ii) it has limited cognitive capacity, and iii) we intuitively use a world-referenced system. In this paper, we provide an in-depth analysis of these human-factors limitations. Conclusion: AR does not come for free. Fundamental human limitations seriously constrain see-through AR systems for the infantry and should be considered in their design and deployment.
To ensure safe mission completion, Army aviators must be prepared to execute appropriate emergency procedures (EPs) in a range of situations. Augmented and Virtual Reality (AR/VR) technologies provide novel opportunities to enhance the performance of emergency procedures by presenting them in powerful simulations of the operational environment. USAARL is currently developing a VR EP research simulator to support systematic human performance research, development, test, and evaluation (RDT&E) programs that investigate, and thereby enhance, the effective and safe execution of EPs. Factors to be investigated include workload, flight maneuver, displays, multisensory stimuli, and aircrew coordination, as well as psychophysiological and operational stressors known to potentially impact EP execution. The USAARL EP simulator is being developed and operates within the Unity Real-Time Development Platform, currently instantiated with HTC Vive Pro hardware. To maximize immersion, the user controls the simulation with virtual reality gloves. Pupil Labs hardware and software have been integrated with the HTC Vive Pro to record, archive, and analyze synchronous, time-stamped oculometric data. An engine fire and a single-engine failure are the emergencies implemented in the EP simulator to date. One mode of operation permits the user to passively experience the EP with no required input; other modes require the participant to execute the EP with predetermined cueing stimuli ranging from substantial step-by-step cueing to no cueing. Future work will incorporate additional EPs into standard maneuvers with defined physiological stressors.
Virtual Reality (VR) technology lets users train for high-stakes situations in the safety of a virtual environment (VE). Yet user movement through such an environment can cause postural instability and motion sickness. These issues are often attributed to how the brain processes visual self-motion information in VEs. Low-contrast conditions, like those caused by dense fog, are known to affect observers’ self-motion perception, but it is not clear how posture, motion sickness, and navigation performance are affected by this kind of visual environment degradation. Ongoing work using VR focuses on three aspects of this problem. First, we verify the effects in VR of low contrast on visual speed estimates. Second, we test how contrast reduction affects posture control, motion sickness, and performance during a VR navigation task. Third, we examine whether it is useful to augment low-contrast conditions with high-contrast visual aids in the environment.
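One common way to produce such low-contrast conditions in a rendered VE is an exponential fog blend, which also makes the resulting contrast loss easy to quantify. The sketch below uses illustrative density and airlight values and is not necessarily the contrast-reduction method used in this work.

```python
import numpy as np

def fog_blend(radiance, distance_m, density_per_m, airlight=0.7):
    """Exponential fog model often used in rendering: surface radiance is
    attenuated by transmission t = exp(-density * distance) and replaced
    by airlight (the fog's own brightness) as t falls."""
    t = np.exp(-density_per_m * distance_m)
    return t * radiance + (1.0 - t) * airlight

def michelson_contrast(i_max, i_min):
    return (i_max - i_min) / (i_max + i_min)

# How much the contrast of a high-contrast cue at 20 m survives in moderately
# dense fog (parameter values are illustrative only):
bright, dark = fog_blend(np.array([0.9, 0.1]), 20.0, 0.05)
print(michelson_contrast(0.9, 0.1), "->", michelson_contrast(bright, dark))
```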
A pilot's Helmet-Mounted Display (HMD) is now a critical part of the aircraft system. Next-generation HMDs will be able to display information and imagery binocularly, with stereoscopic depth. Stereo 3D (S3D) can potentially be used to enhance situational awareness and improve performance. The degree to which performance improves may be linked to the individual visual capabilities of the user, in particular stereo acuity. Stereo acuity varies tremendously in the general population, with up to 30% of people classed as 'stereo blind'. For most military aviators there is a minimum stereo acuity standard; however, current test methods are crude and fallible. Many previous S3D studies do not accurately characterize individual stereo acuity, and in some cases do not even screen for its presence, making their results difficult to interpret. The Operational Based Vision Assessment (OBVA) laboratory has developed a flight simulation platform using an SA Photonics SA-62 HMD to display stereoscopic symbology and five 85-inch displays to provide the "out the window" view. After completing a battery of vision tests, participants fly various mission profiles while responding to a combination of navigational instructions and warning alerts displayed in the HMD. The warning alerts are displayed in 2D (flashing at 1 Hz), intermittent S3D (flashing on and off at depth), persistent S3D (alternating between two depth planes), and dynamic S3D (motion in depth). We present preliminary data examining whether a stereoscopic HMD could be used to improve performance when responding to a critical warning alert, and discuss potential implications for military aviator vision standards as well as HMD requirements.
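To see why the choice of depth planes interacts with individual stereo acuity, the relative disparity between two symbology depths can be compared against a user's measured threshold. The small-angle approximation below uses an assumed 63 mm interpupillary distance and example depths, not the study's actual configuration.

```python
import numpy as np

def relative_disparity_arcsec(d1_m, d2_m, ipd_m=0.063):
    """Approximate relative binocular disparity between two depth planes.

    Small-angle approximation: delta ≈ IPD * |1/d1 - 1/d2| (radians),
    converted to arc-seconds for comparison against stereo acuity
    thresholds, which are usually quoted in that unit.
    """
    delta_rad = ipd_m * abs(1.0 / d1_m - 1.0 / d2_m)
    return np.degrees(delta_rad) * 3600.0

# Example: symbology alternating between 2 m and 3 m virtual depth planes.
print(relative_disparity_arcsec(2.0, 3.0))   # ~2170 arcsec, far above typical thresholds
```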
This paper studies head motion profiles from twenty-seven individual Warfighters conducting operationally relevant scenarios in order to better understand the phenomenology of head motion in high-intensity environments and thereby improve the design of future combat systems. As the military transitions from analog to digital technology, the scientific community is confronted with new dynamic parameters of system performance, specifically frame rate, refresh rate, and latency. For helmet-mounted visual augmentation systems (VAS), the impact of these parameters is most evident during head movement. The source data are collected using small inertial measurement unit (IMU) data loggers affixed to Warfighters' helmets to capture each Warfighter's observation vector. These data are analyzed to determine unique characteristics of head movements, including the arc length of rotational movements and the associated accelerations, scan path, observation vector amplitudes, and movement/fixation times. The paper presents findings, derived from this head motion data, that inform recommendations for frame rate, maximum system latency, and system resolution, to aid in requirements generation for digital VAS, including mixed reality (MR), augmented reality (AR), and virtual reality (VR) systems.
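A sketch of how such head-motion metrics (angular speed and arc length of rotational movement) could be derived from a helmet IMU's orientation log is shown below; it assumes the logger provides unit quaternions with timestamps and is illustrative, not the analysis pipeline used in the study.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def angular_metrics(quats, timestamps):
    """Head-motion metrics from a helmet-mounted IMU orientation log.

    quats      : (N, 4) array of unit quaternions (x, y, z, w)
    timestamps : (N,) array of sample times [s]
    Returns per-sample angular speed [deg/s] and the cumulative arc length
    [deg] swept by the observation vector's rotational movement.
    """
    rots = R.from_quat(quats)
    # Relative rotation between consecutive samples -> rotation angle per step.
    step = rots[:-1].inv() * rots[1:]
    angles_deg = np.degrees(step.magnitude())
    dt = np.diff(timestamps)
    angular_speed = angles_deg / dt
    arc_length = np.cumsum(angles_deg)
    return angular_speed, arc_length

# Angular accelerations (finite differences of angular_speed) and fixation
# detection via a simple speed threshold can then be derived from these series.
```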
This paper describes the integration and flight testing of the Microsoft HoloLens 2 as a head-mounted display in DLR's research helicopter ACT/FHS. In previous work, the HoloLens was integrated into a helicopter simulator. In migrating the HoloLens to a real helicopter, the main challenge was head tracking, because the HoloLens is not designed to operate on moving vehicles. Therefore, the internal head tracking is operated in a limited, rotation-only mode, and the resulting drift errors are compensated with an external tracker, several of which were tested in advance. The fusion is performed with a Kalman filter that contains a nonlinear weighting. Internal tracking errors of the HoloLens caused by vehicle accelerations are mitigated with a system identification approach. For calibration, the virtual world is aligned manually using the helicopter's noseboom. The external head tracker is calibrated largely automatically using an optimization approach, and therefore works for all trackers regardless of their mounting positions on the vehicle and head. Most of the pre-tests were carried out in a car, which indicates the flexibility of the approach in terms of vehicle type. The flight tests have shown that the overall quality of this head-mounted display solution is very good: the conformal holograms are jitter-free, there is no perceptible latency, and lower-frequency errors are small, which greatly improves immersion. Being able to use almost all features of the HoloLens 2 is a major advantage, especially for rapid research and development.
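The drift-correction idea can be illustrated with a one-axis filter in which an external measurement corrects the internal tracker's yaw, and the correction gain is attenuated nonlinearly for large innovations (treating them as external-tracker outliers). The structure and parameter values below are assumptions for illustration, not DLR's implementation.

```python
import numpy as np

class DriftCorrector:
    """Sketch of fusing a rotation-only internal tracker with an external
    head tracker: the internal yaw drifts, the external measurement corrects
    it, and the gain is soft-limited for large innovations."""

    def __init__(self, q_drift=1e-4, r_meas=4e-2):
        self.bias = 0.0          # estimated yaw drift [rad]
        self.p = 1.0             # bias variance
        self.q = q_drift         # drift process noise
        self.r = r_meas          # external tracker measurement noise

    def update(self, yaw_internal, yaw_external):
        self.p += self.q
        innovation = self._wrap(yaw_external - (yaw_internal - self.bias))
        k = self.p / (self.p + self.r)
        # Nonlinear weighting: attenuate the influence of large innovations.
        k *= 1.0 / (1.0 + (innovation / 0.2) ** 2)
        self.bias -= k * innovation
        self.p *= (1.0 - k)
        return yaw_internal - self.bias     # drift-corrected yaw

    @staticmethod
    def _wrap(a):
        return (a + np.pi) % (2 * np.pi) - np.pi
```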
We have designed a visual flight guidance system that enables manual control of aircraft operations in degraded visual environments down to Cat III, for both take-off and landing. This is achieved by means of visual guidance cues displayed on a head-up display (HUD); while this is not in itself novel, we believe our development methods and our approach to verifying the system's operation are. In order to certify the system as airworthy, compliance with the relevant airworthiness standards defined in 14 CFR Part 21 and other related guidance material needs to be demonstrated. Demonstrating compliance by actually flying the aircraft a statistically appropriate number of times is prohibitively expensive. The challenge in this case was to harmonize existing flight guidance algorithms with a faithful aerodynamic model of a new airframe and to demonstrate their effectiveness in a manner that was economically viable. To achieve this, we constructed a Digital Simulation and Verification Environment (DSVE) hosting a Digital Twin (DT) of the system, such that its operation in real conditions could be accurately predicted.
Paul Wisely's paper is being published without an associated oral presentation because he passed away on 18 March, 2021 after a long illness. Paul Wisely was a member of SPIE since 2006 and a senior member since 2011. He won SPIE's "Best Paper" award two years running, 2009 and 2010, for papers presented to the Display Technologies and Applications for Defense, Security, and Avionics III & IV conferences. Paul will be sadly missed by the many SPIE colleagues who knew him.
Robots, equipped with powerful modern sensors and perception algorithms, have enormous potential to use what they perceive to provide enhanced situational awareness to their human teammates. One such type of information is the set of changes the robot detects in the environment since a previous observation. A major challenge in sharing this information from the robot to the human is the interface: how to properly aggregate change detection data, present it succinctly for the human to interpret, and allow the human to interact with the detected changes, e.g., to label them, discard them, or even task the robot to investigate, for the purposes of enhanced situational awareness and decision making. In this work, we address this challenge through the design of an augmented reality interface for aggregating, displaying, and interacting with changes detected by an autonomous robot teammate. We believe the outcomes of this work could have significant applications for Soldiers interacting with any type of high-volume, autonomously generated information in Multi-Domain Operations.
Collaborative multi-sensor perception enables a sensor network to provide multiple views or observations of an environment, in a way that collects multiple observations into a cohesive display. In order to do this, multiple observations must be intelligently fused. We briefly describe our existing approach for sensor fusion and selection, where a weighted combination of observations is used to recognize a target object. The optimal weights that are identified control the fusion of multiple sensors, while also selecting those which provide the most relevant or informative observations. In this paper, we propose a system which utilizes these optimal sensor fusion weights to control the display of observations to a human operator, providing enhanced situational awareness. Our proposed system displays observations based on the physical locations of the sensors, enabling a human operator to better understand where observations are located in the environment. Then, the optimal sensor fusion weights are used to scale the display of observations, highlighting those which are informative and making less relevant observations simple for a human operator to ignore.
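A minimal sketch of mapping fusion weights to display emphasis is given below, assuming a softmax weighting over per-sensor relevance scores; the actual weight optimization described in the paper may differ.

```python
import numpy as np

def fusion_weights(scores):
    """Turn per-sensor relevance scores into normalized fusion weights
    (softmax, as one simple choice)."""
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

def display_scale(weights, min_scale=0.3, max_scale=1.5):
    """Map fusion weights to the on-screen size/opacity of each sensor's
    observation so informative views dominate and weak ones recede."""
    w = (weights - weights.min()) / (np.ptp(weights) + 1e-9)
    return min_scale + w * (max_scale - min_scale)

scores = np.array([2.1, 0.4, 1.3, -0.5])   # e.g. per-sensor target evidence
print(display_scale(fusion_weights(scores)))   # scale factor per sensor marker
```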
Efforts are underway across the defense and commercial industries to develop cross-reality (XR), multi-user operation centers in which human users can perform their work while aided by intelligent systems. At their core is the objective to accelerate decision-making and improve efficiency and accuracy. However, presenting data to users in an XR, multi-dimensional environment results in a dramatic increase in extraneous information density. Intelligent systems offer a potential mechanism for mitigating information overload while ensuring that critical and anomalous data is brought to the attention of the human users in an immersive interface. This paper describes such a prototype system that combines real and synthetic motion sensors which, upon detection of an event, send a captured image for processing by a YOLO cluster. Finally, we describe how a future system can integrate a decision-making component for evaluation of the resulting metadata to determine whether to inject the results into an XR environment for presentation to human users.
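An illustrative event-to-inference pipeline for such a system is sketched below; the endpoint URL, payload format, and confidence threshold are hypothetical, since the paper does not specify the cluster's interface.

```python
import base64, json, queue, threading
import requests   # for posting captured frames to the detection service

# Hypothetical endpoint for the YOLO inference cluster (illustrative only).
YOLO_ENDPOINT = "http://yolo-cluster.local/detect"
CONFIDENCE_FLOOR = 0.6            # assumed policy for suppressing weak detections

events = queue.Queue()            # motion sensors push (sensor_id, jpeg_bytes) here

def on_motion_event(sensor_id, jpeg_bytes):
    """Called by a real or synthetic motion sensor when it trips."""
    events.put((sensor_id, jpeg_bytes))

def worker():
    while True:
        sensor_id, jpeg = events.get()
        payload = {"sensor": sensor_id,
                   "image": base64.b64encode(jpeg).decode("ascii")}
        detections = requests.post(YOLO_ENDPOINT, json=payload, timeout=5).json()
        # Keep only confident detections; a downstream decision-making component
        # would judge whether the metadata warrants injection into the XR scene.
        relevant = [d for d in detections if d.get("confidence", 0) >= CONFIDENCE_FLOOR]
        if relevant:
            print(json.dumps({"sensor": sensor_id, "objects": relevant}))

threading.Thread(target=worker, daemon=True).start()
```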
This paper presents research on the use of penetrating radar combined with 3-D computer vision for real-time, augmented reality enabled target sensing. Small-scale radar systems face the issue that positioning systems are inaccurate, non-portable, or challenged by poor GPS signals. The addition of modern computer vision to current cutting-edge penetrating radar technology expands the common 2-D imaging plane to six degrees of freedom. Because each radar scan is effectively a vector whose length corresponds to depth from the transmitting and receiving antennae, these technologies used in conjunction can generate an accurate 3-D model of the internal structure of any material that radar can penetrate. The same computer vision device that localizes the radar data can also serve as the basis for an augmented reality system. Augmented reality radar technology has applications in threat detection (human through-wall, IED, landmine) as well as civil applications (wall and floor structure, buried item detection). The goal of this project is to create a data registration pipeline and display the radar scan data visually in a 3-D environment using localization from a computer vision tracking device. Processed radar traces are overlaid in real time on an augmented reality screen where the user can view the radar signal intensity to identify and classify targets.
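A sketch of the registration step, placing one radar trace into world space from the vision tracker's 6-DoF pose, is shown below; the assumption that the antenna boresight is the device's local +z axis is illustrative and depends on the actual hardware mounting.

```python
import numpy as np

def trace_points_world(pose_position, pose_rotation, samples, max_depth_m):
    """Place one processed radar trace into the 3-D scene.

    pose_position : (3,) antenna position from the vision tracker [m]
    pose_rotation : (3, 3) rotation matrix of the tracked device
    samples       : (N,) trace intensities, index proportional to depth
    max_depth_m   : depth corresponding to the last sample
    Returns (N, 3) world-space points and their intensities for rendering.
    """
    n = len(samples)
    depths = np.linspace(0.0, max_depth_m, n)
    boresight = pose_rotation @ np.array([0.0, 0.0, 1.0])   # assumed +z boresight
    points = pose_position + np.outer(depths, boresight)
    return points, np.asarray(samples)

# Each new trace appends its (point, intensity) pairs to a point cloud or voxel
# grid that the AR display colours by intensity, so buried or through-wall
# targets appear as high-intensity clusters behind the scanned surface.
```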