Comprehensive situation awareness is very important for aircrews handling complex situations like landing approaches or taxiing, especially under adverse weather conditions. DLR's Institute of Flight Guidance is therefore developing an Enhanced Vision System that uses different forward-looking imaging sensors to gain the information needed for executing given tasks. Furthermore, terrain models, if available, can be used to control as well as to support the sensor data processing. To date, the most promising sensor, owing to its low weather dependence compared with other imaging sensors, is a 35 GHz MMW radar from DASA, Ulm, which provides range data at a frame rate of about 16 Hz. In previous contributions, first experimental results of our radar data processing were presented. In this paper we treat the radar data processing in more detail, focusing on the automatic extraction of features relevant for landing approaches and taxiing maneuvers. In the first part of this contribution we describe a calibration of the MMW radar, which is necessary to determine the exact relationship between raw sensor data (pixels) and world coordinates; the calibration also indicates how accurately features can be located in the world. The second part of this paper describes our approach for automatically extracting features relevant for landing and taxiing. Improved spatial resolution as well as noise reduction is achieved with a multi-frame approach. The correspondence of features across frames is established with the aid of navigation sensors such as INS or GPS, but can also be found by tracking methods. To demonstrate the performance of our approach we applied the extraction method to simulated as well as real data. The real data were acquired using a test van and a test aircraft, both equipped with a prototype of the imaging MMW radar from DASA, Ulm.
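The navigation-aided multi-frame idea can be pictured with a minimal sketch: each polar radar return is converted to Cartesian sensor coordinates, mapped into a common world frame using the INS/GPS pose of its frame, and accumulated so that repeated detections of the same feature average out noise. The function names, 2D coordinate conventions, and grid-based clustering below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def polar_to_sensor(range_m, azimuth_rad):
    """Convert a radar return (range, azimuth) to 2D sensor coordinates."""
    return np.array([range_m * np.cos(azimuth_rad),
                     range_m * np.sin(azimuth_rad)])

def sensor_to_world(p_sensor, pose):
    """Map a sensor-frame point into the world frame.
    pose = (x, y, heading) from INS/GPS for the frame in question."""
    x, y, heading = pose
    c, s = np.cos(heading), np.sin(heading)
    R = np.array([[c, -s], [s, c]])
    return R @ p_sensor + np.array([x, y])

def accumulate_features(detections, poses, cell=1.0):
    """Accumulate per-frame detections in a world-fixed grid.
    detections: list (one entry per frame) of (range, azimuth) tuples.
    poses: per-frame INS/GPS poses. Returns averaged world positions."""
    clusters = {}
    for frame_dets, pose in zip(detections, poses):
        for rng, az in frame_dets:
            p_world = sensor_to_world(polar_to_sensor(rng, az), pose)
            key = tuple(np.floor(p_world / cell).astype(int))
            clusters.setdefault(key, []).append(p_world)
    # Averaging repeated observations of the same feature reduces noise
    # and sharpens its estimated position beyond a single frame.
    return [np.mean(pts, axis=0) for pts in clusters.values()]
```

In practice the same correspondence could instead be maintained by a tracker, as the abstract notes; the grid cell size trades off noise suppression against the risk of merging nearby features.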
Weather- and daylight-independent operation of modern traffic systems is strongly required for optimized and economical availability. Helicopters, small aircraft and military transport aircraft in particular, which frequently operate close to the ground, need effective and cost-effective Enhanced Vision sensors. Today's technical progress in sensor technology and processing speed makes new concepts realizable. Against this background, the paper reports on the improvements under development within the HiVision program at DaimlerChrysler Aerospace. A sensor demonstrator based on FMCW radar technology, operating in the mm-wave band with a high information update rate, has been upgraded to improve performance and fitted for flight on an experimental basis. The results achieved so far demonstrate the capability to produce weather-independent enhanced vision. In addition, the demonstrator has been tested on board a high-speed ferry on the Baltic Sea, because fast vessels have a similar need for weather-independent operation and anti-collision measures. In the future, one sensor type may serve both 'worlds' and help make traffic easier and safer. With its specific features, such as high-speed scanning and weather penetration, and the additional benefit of cost-effectiveness, the described demonstrator fills the technology gap between optical sensors (infrared) and standard pulse radars.
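As background on the FMCW principle used here (a textbook relation, not a HiVision-specific specification), the beat frequency of a linear frequency sweep of bandwidth \(B\) and duration \(T\) encodes the target range \(R\):

\[
f_b = \frac{2BR}{cT} \quad\Longrightarrow\quad R = \frac{cTf_b}{2B}, \qquad \Delta R = \frac{c}{2B},
\]

so a single spectral analysis of the beat signal yields all ranges in one sweep, which is what permits the high information update rate mentioned above.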
Recently the demand to supplement existing airborne radar systems with enhanced forward-looking capabilities has increased considerably, and available radar systems are not able to fulfill these requirements. Here a new approach is proposed to cover the forward-lying sector with respect to the flight path. The new radar system, denoted SIREV (Sector Imaging Radar for Enhanced Vision), is presently under development at DLR. Due to its all-weather capability and its ability to produce high-quality radar images either as a top view (mapping mode) or as a pilot's view (central-perspective mode), the system is especially well suited for navigation support, autonomous landing approaches, and taxi support on the ground. In this paper the authors investigate in particular the azimuth properties of the new system. Azimuth bandwidth and resolution are calculated and discussed as functions of an arbitrary illuminated sector. Finally, a short compilation of the system parameters fixed on the basis of these investigations concludes the description of the new SIREV radar system.
The analysis of accidents focused our work on the avoidance of 'Controlled Flight Into Terrain' caused by insufficient situation awareness. Analysis of safety concepts led us to the design of the proposed synthetic vision system, which will be described. Since most information on these 3D displays is shown graphically, it can be intuitively understood by the pilot. What new possibilities does an SVS offer for enhancing situation awareness? First, ground-collision hazards can be detected by monitoring a perspective Primary Flight Display; from a psychological point of view this is based on the perception of expanding objects in the visual flow field. Supported by a Navigation Display, a local conflict resolution can be mentally worked out very quickly. Second, it is possible to follow a 3D flight path visualized as a 'tunnel in the sky,' which can be further improved by a flight-path prediction. These are the prerequisites for safe and adequate movement in any kind of spatial environment. However, situation awareness also requires the abilities of navigation and spatial problem solving, both of which rest on higher cognitive functions in real as well as synthetic environments. In this paper the current training concept is analyzed, the advantages of integrating an SVS into pilot training are discussed, and the necessary requirements for terrain depiction are pinpointed. Finally, a modified Computer Based Training course for familiarization with Salzburg Airport for an SVS-equipped aircraft is presented; it was developed by Darmstadt University of Technology in cooperation with Lufthansa Flight Training.
Fundamental to vision enhancement systems for vehicles are sensors that provide precise, real-time attitude and heading information. While attitude and heading reference systems (AHRS) based on high-performance inertial components exist, such systems are traditionally very expensive and inapplicable to price-sensitive applications and markets. This paper demonstrates an AHRS that combines small, low-cost, solid-state inertial sensors with an advanced GPS-derived attitude determination technology. The resulting AHRS is inexpensive and rugged, and it provides information with the accuracy and responsiveness needed to drive many enhanced/synthetic-vision displays, such as tunnel-in-the-sky guidance displays. The instrument also provides GPS position, velocity, and time information. Further, by using complementary and overlapping sources of information for integrity checking, the GPS/inertial AHRS is robust to individual sensor failures and can operate safely should GPS become unavailable.
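The core of such a GPS/inertial blend can be illustrated with a minimal complementary-filter sketch: high-rate gyro integration supplies short-term attitude while the slower, drift-free GPS-derived attitude corrects long-term drift. The names, rates, and the simple Euler-angle treatment below are generic assumptions for illustration, not the instrument's actual filter.

```python
import numpy as np

def complementary_filter(att_prev, euler_rates, gps_att, dt, tau=10.0):
    """Blend gyro-propagated attitude with a GPS-derived attitude fix.
    att_prev    : previous roll/pitch/yaw estimate (rad), shape (3,)
    euler_rates : body angular rates already mapped to Euler-angle rates (rad/s)
    gps_att     : attitude from the GPS attitude solution (rad), or None
    dt          : time step (s); tau: crossover time constant (s)."""
    # Short-term: propagate with the inertial sensors (low noise, but drifts).
    att_pred = att_prev + euler_rates * dt
    if gps_att is None:
        return att_pred          # coast on inertial sensors if GPS drops out
    # Long-term: pull the estimate toward the drift-free but noisier GPS attitude.
    alpha = tau / (tau + dt)
    return alpha * att_pred + (1.0 - alpha) * gps_att
```

The same structure also supports the integrity-checking argument in the abstract: a persistent disagreement between the two branches flags a sensor fault, and the filter degrades gracefully to inertial-only operation.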
Low-cost, high-performance graphics solutions based on PC hardware platforms are now capable of rendering synthetic vision of a pilot's out-the-window view during all phases of flight. When coupled to a GPS navigation payload, the virtual image can be fully correlated with the physical world. In particular, differential GPS services such as the Wide Area Augmentation System (WAAS) will provide all aviation users with highly accurate 3D navigation. In addition, short-baseline GPS attitude systems are becoming a viable and inexpensive solution. A glass-cockpit display rendering terrain draped with geographically specific imagery in real time can be coupled with high-accuracy (7 m, 95% positioning; sub-degree pointing), high-integrity (99.99999% position error bound) differential GPS navigation/attitude solutions to provide both situational awareness and 3D guidance to (auto)pilots throughout the en-route, terminal-area, and precision-approach phases of flight. This paper describes the technical issues addressed when coupling GPS and glass-cockpit displays, including the navigation/display interface, real-time 60 Hz rendering of terrain with multiple levels of detail under demand paging, and the construction of verified terrain databases draped with geographically specific satellite imagery. Further, on-board recordings of the navigation solution and the cockpit display provide a replay facility for post-flight simulation based on live landings, as well as synchronized multiple display channels with different views from the same flight. PC-based solutions that integrate GPS navigation and attitude determination with 3D visualization provide the aviation community, and general aviation in particular, with low-cost, high-performance guidance and situational awareness in all phases of flight.
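The level-of-detail and demand-paging idea can be sketched as follows: tiles nearer the aircraft are requested at finer resolution, and tiles not yet resident are queued for asynchronous loading while a coarser fallback is drawn. The tile indexing, thresholds, and cache layout are illustrative assumptions, not the paper's implementation.

```python
import math

# Distance thresholds (in tile units) at which to switch terrain detail;
# illustrative values only. Level 0 is the finest, level 3 the coarsest.
LOD_THRESHOLDS = [(2, 0), (6, 1), (14, 2)]

def lod_for_distance(dist):
    """Pick a level of detail for a tile based on its distance to the aircraft."""
    for max_dist, lod in LOD_THRESHOLDS:
        if dist <= max_dist:
            return lod
    return 3

def tiles_to_render(aircraft_tile, radius, cache, load_queue):
    """Select visible tiles, demand-paging any that are not yet resident."""
    ax, ay = aircraft_tile
    selected = []
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            tile = (ax + dx, ay + dy)
            lod = lod_for_distance(math.hypot(dx, dy))
            if (tile, lod) in cache:
                selected.append(cache[(tile, lod)])
            else:
                load_queue.append((tile, lod))  # fetch asynchronously off the render thread
                # Draw a coarser resident version, if any, until the load completes,
                # so the 60 Hz frame rate is never stalled by disk access.
                coarser = next((cache[(tile, l)] for l in range(lod + 1, 4)
                                if (tile, l) in cache), None)
                if coarser is not None:
                    selected.append(coarser)
    return selected
```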
In this position paper we propose a synthetic vision software system for visualizing non-precision instrument approaches based on the Virtual Reality Modeling Language (VRML) ISO/IEC CD 14772 standard, which is conceptually included in the upcoming MPEG-4 multimedia industry standard. In addition to the two-dimensional information provided by traditional approach plates, the user is presented with a three-dimensional representation of the approach and can interactively investigate approach parameters. Owing to its hardware independence, the system is scalable from general aviation to air transport and defense applications. As a proof of concept, we have modeled the non-precision approach (LOC/DME-C) to Eagle County Regional Airport (EGE) in Colorado.
This paper describes the use of an optical image correlation system that locates landmarks such as a runway or an illuminated pattern in a video camera signal. These landmarks are correlated with stored information in order to determine orientation information for the airplane: the x, y, z position as well as the roll, pitch, and yaw attitude (the six degrees of freedom of the aircraft, 6 DOF). This orientation information is especially useful for controlling or guiding an airplane during the final landing phase of the flight. Recently the use of global positioning system (GPS) receivers in airplanes has become popular. These GPS systems receive signals from multiple satellites to determine the position of an airplane, with a typical accuracy of plus or minus 100 feet. This is quite adequate for determining the airplane's position during most phases of flight but is not accurate enough to control the landing of the airplane; for example, a typical general aviation runway may be only about 75 feet wide, compared with the 100-foot accuracy of a typical GPS receiver. Despite innovations such as differential GPS, the relative accuracy of GPS systems is unlikely to improve much in the future. The system described here provides airplane orientation information accurate enough to control the landing of the airplane.
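At its core, the landmark-locating step is a correlation of the camera image against stored reference patterns; a minimal normalized cross-correlation sketch is shown below (a brute-force NumPy illustration, not the authors' correlator), with the 6-DOF solution then following from the matched landmark positions.

```python
import numpy as np

def locate_landmark(image, template):
    """Slide a stored landmark template over the camera frame and return the
    (row, col) of the best normalized cross-correlation match.
    Brute force for clarity; a practical system would use FFT-based correlation."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            if denom == 0:
                continue
            score = (w * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```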
In future aircraft cockpit designs, SVS (Synthetic Vision System) databases will be used to display 3D physical and virtual information to pilots. In contrast to pure warning systems (TAWS, MSAW, EGPWS), an SVS enhances pilot spatial awareness with three-dimensional perspective views of the objects in the environment. Therefore all kinds of aeronautically relevant data have to be integrated into the SVS database: navigation data, terrain data, obstacles and airport data. For the integration of all these data, the concept of a GIS (Geographical Information System) based HQDB (High-Quality Database) has been created at TUD (Technical University Darmstadt). To enable database certification, quality-assessment procedures according to ICAO Annexes 4, 11, 14 and 15 and RTCA DO-200A/EUROCAE ED-76 were established in the concept. They can be differentiated into object-related quality-assessment methods, following the keywords accuracy, resolution, timeliness, traceability, assurance level, completeness and format, and GIS-related quality-assessment methods, with the keywords system tolerances, logical consistency and visual quality assessment. An airport database is integrated into the concept as part of the High-Quality Database. The contents of the HQDB are chosen to support both flight-guidance SVS and other aeronautical applications such as SMGCS (Surface Movement Guidance and Control Systems) and flight simulation. Most airport data are not yet available: even though data for runways, thresholds, taxi lines and parking positions were to be generated by the end of 1997 (ICAO Annexes 11 and 15), only a few countries have fulfilled these requirements. For that reason, methods of creating and certifying airport data have to be found. Remote sensing and digital photogrammetry serve as means to acquire large numbers of airport objects with high spatial resolution and accuracy in much less time than classical surveying methods. Remotely sensed images can be acquired from satellite or aircraft platforms. To meet the most demanding horizontal accuracy requirement stated in ICAO Annex 14, 0.50 meters for runway centerlines, only images acquired from aircraft-based sensors can currently be used as source data, and ground referencing with GCPs (ground control points) remains obligatory. A DEM (Digital Elevation Model) can be created automatically in the photogrammetric process and used as a highly accurate elevation model for the airport area. The final verification of the airport data is accomplished with independently surveyed runway and taxiway control points. The concept of generating airport data by means of remote sensing and photogrammetry was tested at Stuttgart airport, Germany. The results showed that the final accuracy was within the specification defined by ICAO Annex 14.
As presented in previous contributions to SPIE's Enhanced and Synthetic Vision conferences, DLR's Institute of Flight Guidance is involved in the design, development and testing of enhanced vision systems for flight guidance applications. The combination of forward-looking imaging sensors (such as DaimlerChrysler's HiVision millimeter-wave radar), terrain data stored in on-board databases, and information transmitted from the ground or from other aircraft via data link is used to give the air crew improved situational awareness. This helps pilots handle critical tasks, such as landing approaches and taxiing, especially under adverse weather conditions. The research and development of this system was mostly funded by a national research program running from mid-1996 to mid-1999. On the one hand, this paper gives a general overview of the project and the lessons learned; results of recently conducted flight tests are shown, along with a brief look at evaluation tests performed in mid-1998 on board an Airbus A340 full-flight simulator at the Flight Simulation Center Berlin. On the other hand, an outlook is presented that positions enhanced vision systems as a major element of the pilot assistance systems under development at DLR's Institute of Flight Guidance in close cooperation with the University of the Federal Armed Forces in Munich, Germany.
As part of a continuing collaboration between the University of Manchester and British Aerospace, a signal processing array has been constructed to demonstrate that it is feasible to compensate a video signal in real time for the degradation caused by atmospheric haze. Previously reported work has shown good agreement between a simple physical model of light scattering by atmospheric haze and the observed loss of contrast. This model predicts a characteristic relationship between contrast loss in the image and the range from the camera to the scene. For an airborne camera, the slant range to a point on the ground may be estimated from the airplane's pose, as reported by the inertial navigation system, and the contrast may be obtained from the camera's output. Fusing data from these two streams provides a means of estimating model parameters such as the visibility and the overall illumination of the scene. This knowledge allows the same model to be applied in reverse, thus restoring the contrast lost to atmospheric haze. An efficient approximation of range is vital for a real-time implementation of the method. Preliminary results show that an adaptive approach to fitting the model's parameters, exploiting the temporal correlation between video frames, leads to a robust implementation with significantly accelerated throughput.
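The kind of contrast-loss model referred to here is commonly written as exponential attenuation of scene radiance with range plus an airlight term; the sketch below inverts such a model per pixel, given a range estimate derived from the aircraft pose. The exact model form and parameter names are assumptions for illustration, not necessarily the authors' formulation.

```python
import numpy as np

def dehaze(observed, slant_range, beta, airlight):
    """Invert a simple haze model  I = J*t + A*(1 - t),  t = exp(-beta * d).
    observed    : hazy image intensities, array scaled to [0, 1]
    slant_range : per-pixel camera-to-scene range (same shape), meters
    beta        : atmospheric extinction coefficient (1/m), related to visibility
    airlight    : overall illumination/airlight level A, scalar in [0, 1]."""
    t = np.exp(-beta * slant_range)          # transmission along each viewing ray
    t = np.clip(t, 1e-3, 1.0)                # avoid amplifying noise at long range
    restored = (observed - airlight * (1.0 - t)) / t
    return np.clip(restored, 0.0, 1.0)
```

In an adaptive implementation, beta and airlight would be re-estimated from frame to frame, exploiting the temporal correlation between video frames that the abstract mentions.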
The NASA Johnson Space Center is developing a series of prototype flight test vehicles leading to a functional Crew Return Vehicle (CRV). The development of these prototype vehicles, designated the X-38 program, will demonstrate which technologies are needed to build an inexpensive, safe, and reliable spacecraft that can rapidly return astronauts from the International Space Station (ISS) to Earth. These vehicles are being built using an incremental approach and, where appropriate, take advantage of advanced technologies that may help improve safety, decrease development costs, reduce development time, and outperform traditional technologies. This paper discusses the creation of real-time 3D displays for flight guidance and situation awareness for the X-38 program. These displays incorporate real-time GPS position data, three-dimensional terrain models, a heads-up display (HUD), and landing zone designations. The X-38 crew return vehicle is unique in several ways, including that it does not afford the pilot a forward view through a windscreen and that it uses a parafoil in the final flight phase; as a result, on-board displays that enhance situation awareness face particular challenges. While real-time flight visualization systems have been created that run only on high-end workstations, only flight-rated Windows computers are available as platforms for the X-38 3D displays. The system has been developed to meet this constraint, as well as those of cost, ease of use, reliability and extensibility. Because the X-38 is unpowered and might be required to enter its landing phase from anywhere on orbit, the display must show, in real time and in three dimensions, the terrain, the ideal and actual glide paths, recommended landing areas, and typical heads-up information. Maps, such as aeronautical charts, and satellite imagery can optionally be overlaid on the 3D terrain model to provide additional situation awareness. We will present a component-based toolkit for building these displays for the Windows operating systems.
As part of an advanced night vision program sponsored by DARPA, a method for real-time color night vision based on the fusion of visible and infrared sensors has been developed and demonstrated. The work, based on principles of color vision in humans and primates, achieves an effective strategy for combining the complementary information present in the two sensors. Our sensor platform consists of a 640 x 480 low-light CCD camera developed at MIT Lincoln Laboratory and a 320 x 240 uncooled microbolometer thermal infrared camera from Lockheed Martin Infrared. Image capture, data processing, and display are implemented in real time (30 fps) on commercial hardware. Recent results from field tests at Lincoln Laboratory and in collaboration with U.S. Army Special Forces at Fort Campbell will be presented. During the tests, we evaluated the performance of the system for ground surveillance and as a driving aid. Here we report results using both a wide field of view (42 deg.) and a narrow field of view (7 deg.) platform.
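A minimal way to combine the two bands into a single false-color image is to map each band, plus their difference, onto color channels. The sketch below is only an illustrative channel assignment, deliberately simpler than the biologically inspired opponent-color processing the authors use, and assumes the two images have already been registered and resampled to a common size.

```python
import numpy as np

def fuse_visible_ir(lowlight, thermal):
    """Fuse a low-light visible image and a thermal image into false color.
    Both inputs are float arrays scaled to [0, 1] with identical shape
    (the cameras here have different native resolutions, so resample first)."""
    # Simple channel assignment: warm objects tend toward red, visible-band
    # detail dominates green, and the band contrast drives blue.
    r = thermal
    g = lowlight
    b = np.clip(lowlight - thermal, 0.0, 1.0)
    return np.dstack([r, g, b])
```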
The purpose of this study was to investigate whether a dual-band, sensor-fused image improves visual performance compared to a single-band image. Specifically, we compared behavioral performance using images from an uncooled LIMIRS long-wave infrared sensor and a Fairchild image-intensified low-light CCD against the same images after they had been 'fused' by combining both spectral bands into a two-dimensional color space. Human performance was assessed in two experiments. The first experiment required observers to detect target objects presented against naturalistic backgrounds and then identify whether the detected targets were vehicles or persons. The second experiment measured observers' situational awareness by asking them to rapidly discern whether an image was upright or inverted. Performance in both tasks, as measured by reaction times and error rates, was generally best with the sensor-fused images, although in some instances performance with the single-band images was as good as with the sensor-fused images. The results suggest that sensor fusion may facilitate human performance both by aiding target detection and recognition and by enabling higher levels of more general situational awareness and scene comprehension.
The use of night vision devices (NVDs) has the potential for enhancing driving operations at night by allowing increased mobility and safer operations. However, with this increased capability has come the requirement to manage risks and provide suitable training. Results from field experiments and accident analyses suggest that problems experienced by drivers with NVDs can be attributed to a limited understanding of the NVD capabilities and limitations and to perceptual problems. There is little formal training available to help drivers obtain the required knowledge and skills and little opportunity to obtain and practice perceptual skills prior to driving in the operational environment. NVD users need early and continued exposure to the night environment across a broad range of visual conditions to develop and maintain the necessary perceptual skills. This paper discusses the interim results of a project to develop a Night Driving Training Aid (NDTA) for driving with image intensification (I2) devices. The paper summarizes work to validate requirements, develop instructional materials and software, and deliver the instruction in a multimedia, interactive PC environment. In addition, we discuss issues and lessons learned for training NVD driving knowledge and skills in a PC environment and extending the NDTA to thermal NVDs.
This paper describes the extensive testing performed to validate the vision-based automatic vehicle guidance system developed within the ARGO project at the University of Parma. After a brief introduction covering the main characteristics of the system installed on the ARGO vehicle, the paper describes the 'MilleMiglia in Automatico' tour, its schedule and features. Finally, the paper discusses the results collected during the tour and analyzes possible solutions that may be implemented in the future to make the system more robust against the problems encountered during the test.
This paper presents the cooperation of optical-flow and contour-correspondence modules for motion and structure estimation. By integrating complementary information, this cooperation overcomes a number of problems associated with the separate use of each module. The novelty of this hybrid scheme consists of (1) the simultaneous use of several regions, for each of which both the whole region and its boundary are considered, and (2) the introduction of a new iterative scheme induced by the cooperative concept.
In this paper we are interested in the design and experimental evaluation of a control architecture for an autonomous outdoor mobile robot that relies mainly on vision. We focus on the design of a mechanism that permits the dynamic selection and firing of perception processes. We propose a hybrid architecture that uses an attention mechanism to control the robot's awareness of its environment while managing computational resources and maintaining adequate reactivity. We describe its implementation and experimentation on a robot in an outdoor environment.
Augmented reality techniques can significantly enhance the situational awareness of a user by providing 3D information registered to the user's view of the world. Precise registration is critical for the usability of such a system. Outdoor AR systems usually employ GPS for position and a hybrid combination of magnetometer and inclinometer for orientation estimation, which provide only limited precision. At the Rockwell Science Center (RSC), an approach is being developed that uses terrain horizon silhouettes as visual cues for improving registration precision and for sensor calibration. Since the observer position is known from GPS data, the horizon silhouette can be predicted using a digital elevation model (DEM) database. The silhouette segment the user sees is extracted from a camera pointing in the same direction as the user. This segment is then matched against the complete 360-degree horizon silhouette derived from the DEM data. Once the optimal match has been determined, the camera azimuth, roll, and pitch angles can be computed. This registration approach is being implemented in an AR system, under development at RSC, for enhancing situational awareness in an outdoor scenario. Currently, the information augmentation is provided as a registered overlay on a live video stream.
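The matching step can be pictured as sliding the observed silhouette segment along the 360-degree DEM-predicted horizon profile and scoring the alignment; the azimuth of the best fit then calibrates the camera heading. The sketch below is a simplified 1D formulation (horizon elevation angle as a function of azimuth), not RSC's implementation.

```python
import numpy as np

def best_azimuth_offset(predicted, observed, step_deg):
    """Match an observed horizon segment against the full predicted horizon.
    predicted: elevation angle of the DEM-derived horizon, sampled every
               step_deg over a full 360 degrees (length 360/step_deg)
    observed : elevation angle of the silhouette extracted from the camera,
               sampled at the same angular spacing (a shorter array)
    Returns the azimuth (deg) at which the observed segment fits best."""
    n, m = len(predicted), len(observed)
    best_err, best_idx = np.inf, 0
    for i in range(n):
        # Wrap around 360 degrees when the segment crosses north.
        window = np.take(predicted, np.arange(i, i + m) % n)
        # Allow a constant pitch offset between camera and DEM profiles.
        residual = window - observed
        err = np.mean((residual - residual.mean()) ** 2)
        if err < best_err:
            best_err, best_idx = err, i
    return best_idx * step_deg
```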
Coarse-coding is the transformation of raw data using a small number of broadly overlapping filters. These filters may exist in time, space, color, or other information domains. Inspired by models of natural vision processing, intensity and color information has been previously encoded and successfully decoded using coarse coding. The color and intensity of objects within test images were successfully retrieved after passing through only two coarse filters arranged in a checkerboard fashion. It was shown that a consequence of such a filter is a natural edge enhancement of the objects within the image. Coarse-coding is considered here in a signal processing frequency domain and in a sensory spectral filtering domain. Test signals include single frequency, multiple frequency, and signals with broad frequency content. Gaussian-based filters are used to discriminate between different signals of arbitrary frequency content. The effects of Gaussian shape changes and spectral contrasting techniques are demonstrated. Consequences in filter parameter selection are further discussed.
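As an illustration of the idea (with assumed filter parameters, not the paper's), the sketch below coarse-codes a signal's spectrum with a small bank of broadly overlapping Gaussian filters and shows how the resulting vector of filter responses discriminates between signals of different frequency content.

```python
import numpy as np

def coarse_code_spectrum(signal, fs, centers, sigma):
    """Encode a signal's magnitude spectrum with overlapping Gaussian filters.
    signal  : 1D time-domain samples
    fs      : sampling rate (Hz)
    centers : center frequencies of the coarse filters (Hz)
    sigma   : common filter width (Hz); broad relative to the filter spacing
    Returns one response per filter (the coarse code)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    code = []
    for fc in centers:
        weights = np.exp(-0.5 * ((freqs - fc) / sigma) ** 2)
        code.append(float(np.sum(weights * spectrum)))
    return np.array(code)

# Two broadly overlapping filters suffice to tell these test tones apart:
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
low_tone = np.sin(2 * np.pi * 50 * t)
high_tone = np.sin(2 * np.pi * 200 * t)
print(coarse_code_spectrum(low_tone, fs, centers=[60, 180], sigma=80))
print(coarse_code_spectrum(high_tone, fs, centers=[60, 180], sigma=80))
```

Widening sigma trades frequency selectivity for robustness, which mirrors the filter-parameter trade-offs discussed in the abstract.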
The ability of an intelligent electric vehicle to interact with humans is of capital importance for convincing the public to accept its existence and use, and it can greatly enhance the vehicle's safety in public service. In this paper, an interaction model based on hand-gesture understanding is presented; it offers more compact and intuitive meanings than other interaction models in an outdoor environment. Typical hand gestures for guiding the motion of the vehicle are defined in the model by considering gesture differentiation and human tendencies; they are classified as motion-oriented and direction-oriented gestures for different interactive intentions. The color distribution of human skin is analyzed in different color spaces, revealing that skin colors cluster in a specific, irregularly shaped region and that they differ more in intensity than in color from person to person. A color model of human skin is built for hand-gesture segmentation using the training procedure of an RCE neural network, which can delineate pattern classes of arbitrary shape in feature space. The quality of hand-gesture segmentation is further improved by a hand-forearm separation procedure. A hand-tracking mechanism is proposed to locate the hand by camera pan-tilt and zooming, and gesture recognition is implemented by template matching of multiple features.
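The observation that skin pixels cluster more tightly in chromaticity than in intensity suggests segmenting in an intensity-normalized space; the sketch below uses a crude chromaticity box as a stand-in for the trained RCE-network color model described in the paper, and its threshold values are assumptions for illustration only.

```python
import numpy as np

def skin_mask(rgb):
    """Rough skin segmentation in normalized (r, g) chromaticity space.
    rgb: H x W x 3 float image in [0, 1]. Returns a boolean mask.
    The chromaticity bounds are illustrative, not trained values."""
    total = rgb.sum(axis=2) + 1e-6          # normalize out intensity
    r = rgb[:, :, 0] / total
    g = rgb[:, :, 1] / total
    # Skin chromaticities occupy a compact (if irregular) region; a box is
    # the crudest approximation of the learned decision region.
    return (r > 0.36) & (r < 0.47) & (g > 0.28) & (g < 0.36)
```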
Stanford University has developed a low-cost prototype synthetic vision system and flight tested it onboard general aviation aircraft. The display aids pilots by providing an 'out the window' view, making visualization of the desired flight path a simple task. Predictor symbology provides guidance on straight and curved paths presented in a 'tunnel-in-the-sky' format. Based on commodity PC hardware to achieve low cost, the Tunnel Display system uses differential GPS (typically from Stanford prototype Wide Area Augmentation System hardware) for positioning and GPS-aided inertial sensors for attitude determination. The display has been flown onboard Piper Dakota and Beechcraft Queen Air aircraft at several different locations. This paper describes the system, its development, and flight trials culminating with tests in Alaska during the summer of 1998. Operational experience demonstrated the Tunnel Display's ability to increase flight-path-following accuracy and situational awareness while easing the task of instrument flying.
The aviation industry has long sought a means of conducting all-weather operations. Presently, airport lighting systems provide the only means for aiding the pilot's transition from instrument flight to visual acquisition of the runway environment prior to landing the aircraft. A pilot's ability to see through fog (cloud ceiling and visibility) defines the limitations for conducting operations in instrument meteorological conditions. CAT I approaches are authorized down to a runway visual range (RVR) of 2,400 ft, while CAT IIIa approaches are authorized down to an RVR of 700 ft. Enhanced vision technologies are being investigated to improve the pilot's ability to acquire the visual cues (predominantly airport lighting systems) to the runway environment. If enhanced vision enabled the pilot to see 3.5 times farther than the unaided eye, CAT I operations could be conducted under CAT IIIa conditions. This paper examines the relative theoretical and experimental performance of several enhanced vision technologies. The performance analysis compares the runway-light detection capability of various infrared sensors with that of the eye during the dynamics of an aircraft approach and landing, and further compares the IR performance with FogEye, a UV sensor, and a Laser Visual Approach system. The analysis indicates that although the 1.5 micron and 3 to 5 micron IR sensors are capable of improving on the unaided eye, especially in haze and low-density fog, only the UV sensor, coupled with relatively minor changes to airport light lenses (so that they do not attenuate UV light), provides the potential to aid the pilot in seeing airport lighting 3.5 times farther than the unaided eye. An 8 to 11 micron IR sensor can support enhanced vision of the actual airport surface. These electro-optical capabilities are further compared with the capabilities of millimeter-wave (MMW) systems. Additional collateral features that would aid in more orderly and safer landing operations are also described.
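The 3.5x factor follows directly from the ratio of the RVR minima quoted above; a one-line check:

\[
\frac{\mathrm{RVR}_{\mathrm{CAT\,I}}}{\mathrm{RVR}_{\mathrm{CAT\,IIIa}}} = \frac{2400\ \mathrm{ft}}{700\ \mathrm{ft}} \approx 3.4,
\]

so a sensor that extends the pilot's visual acquisition range by roughly a factor of 3.5 would allow CAT I visual cues to be acquired under CAT IIIa visibility.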
The analysis of accidents focused our work on the avoidance of 'Controlled Flight Into Terrain' caused by insufficient situation awareness. Analysis of safety concepts led us to the design of the proposed synthetic vision system, which will be described. Since most information on these 3D displays is shown graphically, it can be intuitively grasped by the pilot. One key element of an SVS is terrain depiction, which is the topic of this paper. Real-time terrain depiction faces two competing requirements: on the one hand, spatial awareness requires that the synthetic environment be recognizable, which demands that terrain characteristics be preserved; on the other hand, the number of rendered polygons has to be minimized because of the limits of real-time image generation performance. Visual quality can be significantly enhanced if regularly gridded data such as Digital Elevation Model (DEM) data are vectorized; one method of data vectorization is explained in detail and its advantages are discussed. In Virtual Reality (VR) applications, conventional decimation software degrades the visual quality of the geometry, which is then compensated by complex textures and lighting. Since terrain decimated with those tools loses its characteristics, and textures are not acceptable here for several reasons, a terrain-specific decimation has to be performed. How can a DEM be decimated without decreasing its visualization value? In this paper, extraction of terrain characteristics and an adapted decimation are proposed, and the steps from DEM to Terrain Depiction Data (TDD) are discussed in detail.
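One simple way to make decimation terrain-aware, shown as an illustrative stand-in rather than the method proposed in the paper, is to rank grid posts by local curvature so that ridge lines, peaks and valleys survive while flat areas are thinned aggressively:

```python
import numpy as np

def select_characteristic_points(dem, keep_fraction=0.1):
    """Pick the DEM posts that carry the most terrain character.
    dem: 2D array of elevations on a regular grid.
    Returns a boolean mask of posts to keep for subsequent triangulation."""
    # Approximate curvature with the discrete Laplacian: large values mark
    # peaks, pits and ridge/valley lines; near-zero values mark flat slopes.
    lap = np.abs(
        np.roll(dem, 1, 0) + np.roll(dem, -1, 0) +
        np.roll(dem, 1, 1) + np.roll(dem, -1, 1) - 4.0 * dem)
    threshold = np.quantile(lap, 1.0 - keep_fraction)
    keep = lap >= threshold
    # Always keep the border so the terrain tile keeps its footprint.
    keep[0, :] = keep[-1, :] = keep[:, 0] = keep[:, -1] = True
    return keep
```

Lowering keep_fraction reduces the polygon count, which is exactly the trade-off between recognizability and rendering performance described above.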