The Air Force Research Laboratory's Human Effectiveness Directorate (AFRL/HE) supports research addressing human factors associated with Unmanned Aerial Vehicle (UAV) operator control stations. Recent research, in collaboration with Rapid Imaging Software, Inc., has focused on determining the value of combining synthetic vision data with live camera video presented on a UAV control station display. Synthetic information is constructed from databases (e.g., terrain, cultural features, pre-mission plan), as well as from numerous information updates received via networked communication with other sources (e.g., weather, intel). This information is overlaid conformally, in real time, onto the dynamic camera video image presented to operators. Synthetic vision overlay technology is expected to improve operator situation awareness by highlighting key spatial information elements of interest directly on the video image, such as threat locations, expected target locations, landmarks, and emergency airfields. It may also help maintain an operator's situation awareness during periods of video datalink degradation or dropout and when operating in conditions of poor visibility. Additionally, this technology may serve as an intuitive means of distributed communication between geographically separated users. This paper discusses the tailoring of synthetic overlay technology for several UAV applications. Pertinent human factors issues are detailed, as are the usability, simulation, and flight test evaluations required to determine how best to combine synthetic visual data with live camera video on a ground control station display and to validate that a synthetic vision system is beneficial for UAV applications.
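To make the overlay concept concrete, the sketch below shows the core geometric step such a system must perform: projecting a geo-referenced point of interest (e.g., a threat location) into the pixel coordinates of the live camera image using the camera's pose and a pinhole model. The frames, function names, and parameters here are illustrative assumptions, not the AFRL/Rapid Imaging implementation.

```python
import numpy as np

def project_to_image(point_ned, cam_pos_ned, r_cam_to_ned, fx, fy, cx, cy):
    """Project a world point (local NED frame) into pixel coordinates.

    point_ned, cam_pos_ned: 3-vectors in a local north-east-down frame.
    r_cam_to_ned: 3x3 rotation taking camera-frame vectors into NED;
                  its transpose maps NED into the camera frame.
    fx, fy, cx, cy: pinhole intrinsics (focal lengths, principal point).
    Returns (u, v) pixel coordinates, or None if the point is behind
    the camera and cannot be drawn conformally.
    """
    p_cam = r_cam_to_ned.T @ (np.asarray(point_ned) - np.asarray(cam_pos_ned))
    if p_cam[2] <= 0.0:                 # behind the image plane
        return None
    u = cx + fx * p_cam[0] / p_cam[2]   # camera x maps to image right
    v = cy + fy * p_cam[1] / p_cam[2]   # camera y maps to image down
    return u, v
```

A symbol redrawn at (u, v) every frame stays registered to the underlying terrain as the camera moves, which is what makes the overlay conformal.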
This paper describes a new approach to situation awareness that combines video sensor technology and synthetic vision technology in a unique fashion to create a hybrid vision system. Our implementation of the technology, called "SmartCam3D" (SCS3D), has been flight tested by both NASA and the Department of Defense with excellent results. This paper details its development and the results of those flight tests.
Windshields and windows add considerable weight and risk to vehicle design, and because of this, many future vehicles will employ a windowless cockpit design. This windowless cockpit design philosophy prompted us to look at what would be required to develop a system that provides crewmembers and operations personnel an appropriate level of situation awareness. The system created to date provides a real-time 3D perspective display that can be used in all weather and visibility conditions. While the advantages of a synthetic-vision-only system are considerable, the major disadvantage of such a system is that it displays a synthetic scene created from "static" data acquired by an aircraft or satellite at some point in the past. The SCS3D system we present in this paper is a hybrid synthetic vision system that fuses live video stream information with a computer-generated synthetic scene. This hybrid system can display a dynamic, real-time scene of a region of interest, enriched by information from a synthetic environment system (see Figure 1).
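The abstracts do not specify how SCS3D composites the two image sources, but one simple way a hybrid display can degrade gracefully is a cross-fade driven by datalink quality, as sketched below; the function and its inputs are assumptions for illustration, not the SmartCam3D compositing strategy.

```python
import numpy as np

def fuse_frames(video_frame, synthetic_frame, link_quality):
    """Blend a live video frame with the rendered synthetic scene.

    video_frame, synthetic_frame: HxWx3 uint8 arrays assumed to be
    rendered/registered to the same viewpoint.
    link_quality: 0.0 (total dropout) .. 1.0 (clean video). As the
    datalink degrades, the display leans on the synthetic imagery.
    """
    alpha = float(np.clip(link_quality, 0.0, 1.0))
    fused = (alpha * video_frame.astype(np.float32)
             + (1.0 - alpha) * synthetic_frame.astype(np.float32))
    return fused.astype(np.uint8)
```

During a complete dropout (link_quality = 0) the operator still sees the full synthetic scene rather than a frozen or black frame.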
The SCS3D system has been flight tested on several X-38 flight tests performed over the last several years and on a U.S. Army Unmanned Aerial Vehicle (UAV) ground control station earlier this year. Additional testing using an assortment of UAV ground control stations and UAV simulators from the Army and Air Force will be conducted later this year.
We are also identifying other NASA programs that would benefit from the use of this technology.
The X-38 program began in early 1995 and is developing a series of test vehicles to demonstrate the low-cost technologies and methods required to develop a fully functional Crew Return Vehicle (CRV) that can rapidly return astronauts from the International Space Station to Earth. The X-38 program uses a gradual buildup approach and, where appropriate, takes advantage of advanced technologies that may help improve safety, decrease cost, reduce development time, and outperform traditional technologies. Four atmospheric test vehicles and one space-rated vehicle will be developed and tested during the X-38 program. The atmospheric test vehicles are known as vehicle 131 (V131), vehicle 132 (V132), vehicle 131R (V131R), and vehicle 133 (V133). The space-rated vehicle, which will fly on the Shuttle in 2002 as a payload bay experiment, is known as vehicle 201 (V201).
Synthetic vision has the potential to significantly improve situation awareness for aircraft that do not possess windshields or windows. Windshields and windows add considerable weight and risk to vehicle design. NASA's X-38 crew return vehicle has a windowless cockpit design. Synthetic vision tools have been developed to provide a simulated real-time 3-D perspective to X-38 crews. This virtual cockpit window provides an all-weather, day/night situation awareness display, enriched with a wide variety of flight-related information. The system has already been successfully demonstrated in several flight tests; this paper discusses the challenges faced in developing it and the results of those initial flight tests. While many different types of digital topography, maps, and imagery are available, seamlessly integrating the data requires new approaches not available in standard geographic information systems or flight simulation software. Since much of the data is in cylindrical geographic coordinates, and the computer display API works in Cartesian coordinates, selection of an efficient and accurate coordinate system is crucial. We describe a new method of utilizing a multi-resolution digital topography database that provides high-resolution near-field performance (up to 1 meter) with a complete horizon model, yet retains excellent display speed. The LandForm FlightVision system employed for this purpose utilizes five different resolutions of digital topography in order to model a flight from space to earth landing. Real-time situation awareness provided by the virtual cockpit window has been enhanced by the display of a dynamic landing range model. This model incorporates vehicle flight characteristics and winds aloft information.
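As a concrete example of the coordinate problem the abstract raises, the standard WGS84 geodetic-to-ECEF conversion below turns cylindrical geographic coordinates (latitude, longitude, altitude) into the Cartesian coordinates a display API expects. This is textbook geodesy shown for illustration; the abstract does not state which Cartesian frame LandForm FlightVision actually uses.

```python
import math

A  = 6378137.0            # WGS84 semi-major axis, meters
E2 = 6.69437999014e-3     # WGS84 first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """Convert geodetic coordinates to Earth-centered, Earth-fixed
    Cartesian coordinates (meters)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    # Prime-vertical radius of curvature at this latitude
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + alt_m) * math.sin(lat)
    return x, y, z
```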
The NASA Johnson Space Center is developing a series of prototype flight test vehicles leading to a functional Crew Return Vehicle (CRV). The development of these prototype vehicles, designated as the X-38 program, will demonstrate which technologies are needed to build an inexpensive, safe, and reliable spacecraft that can rapidly return astronauts from the International Space Station (ISS) to Earth. These vehicles are being built using an incremental approach and, where appropriate, take advantage of advanced technologies that may help improve safety, decrease development costs, and reduce development time, as well as outperform traditional technologies. This paper discusses the creation of real-time 3-D displays for flight guidance and situation awareness for the X-38 program. These displays feature the incorporation of real-time GPS position data, three-dimensional terrain models, a heads-up display (HUD), and landing zone designations. The X-38 crew return vehicle is unique in several ways, including that it does not afford the pilot a forward view through a windscreen and that it utilizes a parafoil in the final flight phase. As a result, on-board displays to enhance situation awareness face challenges. While real-time flight visualization systems have been created before, they have been limited to running on high-end workstations; only flight-rated Windows-based computers are available as platforms for the X-38 3-D displays. The system has been developed to meet this constraint, as well as those of cost, ease of use, reliability, and extensibility. Because the X-38 is unpowered and might be required to enter its landing phase from anywhere on orbit, the display must show, in real time and in three dimensions, the terrain, the ideal and actual glide paths, recommended landing areas, and typical heads-up information. Maps, such as aeronautical charts, and satellite imagery can optionally be overlaid on the 3-D terrain model to provide additional situation awareness. We present a component-based toolkit for building these displays for use with the Windows operating system.
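The abstract describes a component-based toolkit but does not publish its API. The sketch below illustrates the general pattern such a toolkit implies: independent display layers (terrain, glide path, HUD) that each absorb vehicle state and render into a shared scene. All names here are hypothetical, not the actual X-38 toolkit interface.

```python
from abc import ABC, abstractmethod

class DisplayLayer(ABC):
    """One pluggable element of the 3-D guidance display."""

    @abstractmethod
    def update(self, vehicle_state):
        """Absorb fresh GPS/attitude data before the next frame."""

    @abstractmethod
    def render(self):
        """Draw this layer into the shared 3-D scene."""

class HudLayer(DisplayLayer):
    """Minimal concrete layer: textual heads-up readouts."""
    def __init__(self):
        self.text = ""

    def update(self, vehicle_state):
        self.text = (f"ALT {vehicle_state['alt_m']:.0f} m  "
                     f"HDG {vehicle_state['hdg_deg']:.0f}")

    def render(self):
        print(self.text)   # a real layer would draw into the 3-D scene

def draw_frame(layers, vehicle_state):
    # Update every layer first, then render back-to-front so the
    # HUD composites over terrain and glide-path layers.
    for layer in layers:
        layer.update(vehicle_state)
    for layer in layers:
        layer.render()

draw_frame([HudLayer()], {"alt_m": 8500.0, "hdg_deg": 270.0})
```

Keeping each information source in its own layer is what makes such a toolkit extensible: adding, say, a landing-zone layer does not disturb the others.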
The Rockwell Digital Imagery Standard (RDIS) format for images was developed in response to the need for a device-independent image format. The RDIS format has been designed to be device independent: it frees technical staff from hardware limitations and allows them to use whatever image display system is available to them.
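RDIS itself is not specified in this passage, so the sketch below only illustrates device independence in the generic sense described: pixel data held in a neutral in-memory form, with small device- or format-specific writers attached as needed. The structure and writer are hypothetical, not the actual RDIS layout.

```python
from dataclasses import dataclass

@dataclass
class NeutralImage:
    """Device-neutral image: dimensions plus row-major 8-bit pixels.
    (Illustrative only; not the actual RDIS layout.)"""
    width: int
    height: int
    pixels: bytes   # grayscale, one byte per pixel

def write_pgm(img: NeutralImage, path: str):
    """One of many possible device/format-specific writers that can be
    driven from the same neutral representation."""
    with open(path, "wb") as f:
        f.write(f"P5 {img.width} {img.height} 255\n".encode())
        f.write(img.pixels)
```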