We have analyzed and experimentally tested the feasibility of thin wire detection using millimeter wave radar. The radar system includes a novel fast-scanning antenna and a transceiver/signal processor unit from BAE Systems.
This paper describes work conducted in 1998 and 1999 by AEROSPATIALE MATRA on the development of an obstacle detection system, which has been tested on a demonstrator aircraft in Toulouse. The purpose of this mock-up was to verify the feasibility of a passive technology and to consider the limits of its use.
Navigation, especially in aviation, has been plagued since its inception by the hazards of poor visibility conditions. Our ground vehicles and soldiers have difficulty moving at night or in low visibility, even with night vision augmentation, because of the lack of contrast and depth perception. Trying to land an aircraft in fog is more difficult still, even with radar tracking. The visible and near-infrared spectral regions have been ignored because of the problem of backscattered radiation from landing-light illumination, similar to that experienced when using high-beam headlights while driving in fog. This paper describes the experimentation related to the development of a visible/near-infrared active hyperstereo vision system for landing an aircraft in fog. Hyperstereo vision is a binocular system with a baseline separation wider than the human interocular spacing. The basic concept is to compare the imagery obtained from alternate wings of the aircraft while illuminating only from the opposite wing. This produces images with a backscatter radiation pattern that has a decreasing gradient away from the side with the illumination source. Flipping the imagery from one wing left to right and comparing it to the opposite wing's imagery allows the backscattered radiation pattern to be subtracted from both sets of imagery. The use of retro-reflectors along the sides of the runway allows the human stereo fusion process to fuse the forward-scatter-blurred hyperstereo imagery of the array of retro-reflectors while minimizing backscatter. An appropriate amount of inverse point spread function deblurring is applied for improved resolution of scene content to aid in the detection of objects on the runway. The experimental system is described and preliminary results are presented to illustrate the concept.
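As a loose illustration of the flip-and-compare idea described above (not the authors' actual processing chain), the sketch below mirrors the right-wing image, estimates the shared backscatter gradient as the average of the aligned pair, and subtracts it from both views. The array names and the averaging step are illustrative assumptions only.

```python
import numpy as np

def remove_backscatter(left_img: np.ndarray, right_img: np.ndarray):
    """Illustrative backscatter-gradient cancellation for a hyperstereo pair.

    Assumes left_img was taken from the left wing while illuminating from the
    right wing (and vice versa), so each image carries a backscatter gradient
    decreasing away from the illuminated side.  Mirroring one image aligns the
    two gradients, so a common-term subtraction largely cancels the backscatter
    while preserving scene differences.  (Hypothetical sketch, not the paper's
    exact method.)
    """
    mirrored_right = np.fliplr(right_img)              # align the backscatter gradients
    backscatter_est = 0.5 * (left_img + mirrored_right)
    left_clean = left_img - backscatter_est            # residual scene content, left view
    right_clean = np.fliplr(mirrored_right - backscatter_est)  # back to right-view orientation
    return left_clean, right_clean
```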
The acquisition of navigation data is an important upgrade of enhanced vision (EV) systems. For example, the position of an aircraft relative to the runway during landing approaches has to be derived directly from EV sensor data if no ILS or GPS navigation information is available. Due to its weather independence, MMW radar plays an important role among possible EV sensors. Generally, information about the altitude of the aircraft relative to a target ahead (the runway) is not available within radar data. A common approach to overcome this so-called vertical position problem is the use of the Flat Earth Assumption, i.e. the altitude above the runway is assumed to be the same as the actual altitude of the aircraft measured by the radar altimeter. Another approach known from the literature is to combine radar images from different positions, similar to stereo and structure-from-motion approaches in computer vision. In this paper we present a detailed investigation of the latter approach. We examine the correspondence problem with regard to the special geometry of radar sensors as well as the principal methodology for estimating 3D information from different range/angle measurements. The main part of the contribution deals with the question of accuracy: What accuracy can be obtained? What are the influences of factors such as vertical beam width, range and angular resolution of the sensor, the relative transformation between different sensor locations, etc.? Based on this investigation, we introduce a new approach for vertical positioning. As a special benefit, this method delivers a validity measure which allows judging the estimate of the relative vertical position from sensor to target. The performance of our approach is demonstrated with both simulated data and real data acquired during flight tests.
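To make the underlying geometry concrete, here is a minimal planar sketch of the two-position ("radar stereo") idea: with a known along-track displacement between two slant-range measurements of the same ground target, the relative height follows from intersecting the two range circles. The planar geometry, the function name, and the example numbers are illustrative assumptions, not the paper's estimator.

```python
import math

def relative_height_from_two_ranges(r1: float, r2: float, d: float):
    """Two-position height estimate, planar sketch.

    Assumes the target (e.g. the runway threshold) lies in the vertical plane
    of the flight path, the aircraft translates a known along-track distance d
    between the two measurements at constant altitude, and r1, r2 are the
    slant ranges measured at the first and second positions.  Returns the
    along-track distance x and the height h of the sensor above the target.
    """
    x = (r1 ** 2 - r2 ** 2 + d ** 2) / (2.0 * d)   # circle-intersection solution
    h_sq = r1 ** 2 - x ** 2
    if h_sq < 0.0:
        raise ValueError("inconsistent ranges: no real intersection")
    return x, math.sqrt(h_sq)

# Example: target 3000 m ahead and 100 m below at the first position,
# aircraft moves 200 m closer before the second measurement.
r1 = math.hypot(3000.0, 100.0)
r2 = math.hypot(2800.0, 100.0)
print(relative_height_from_two_ranges(r1, r2, 200.0))   # ~ (3000.0, 100.0)
```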
Many of today's and tomorrow's real-time aviation applications demand accurate and reliable databases. Common TAWS implementations such as EGPWS, or integrated navigation systems such as Dasa's Integrated Navigation and Flight Guidance System [16], depend essentially on terrain elevation databases. Regarding these applications, the resolution, accuracy, and precision of the available data are of primary concern. On the other hand, 4D Synthetic Vision Systems (SVS) require performance-optimized terrain models for real-time visualization. The content of such databases needs to be reduced and accessible in a real-time format. In 4D SVS, safety-critical terrain databases are essential. Even higher accuracy is required for more demanding tasks such as low-level flights, precision approaches, or landings. In this paper a process is described to reconcile the contradictory demands of accuracy and visualization performance. The complexity of high-resolution terrain models is reduced to enhance the rendering performance. Two different decimation approaches are explained and the resulting terrain databases are described. Each representation of the generated elevation shapes comprises a coarser subset of the input data. A statistical analysis of the resulting altitude errors is presented. The presented results comprise both an offline verification against highly accurate databases and a comparison with altimeter data measured by airplane sensors during flight trials. To evaluate the different databases and to examine specific terrain resolutions, multiple flight trials were performed.
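The decimation-and-error-analysis idea can be sketched in a few lines. The nearest-posting decimation below is a simplified stand-in for the two decimation approaches in the paper, and the synthetic DEM and statistics are illustrative only.

```python
import numpy as np

def decimate_dem(dem: np.ndarray, factor: int) -> np.ndarray:
    """Simple grid decimation: keep every `factor`-th elevation posting."""
    return dem[::factor, ::factor]

def altitude_error_stats(dem: np.ndarray, factor: int) -> dict:
    """Compare the full-resolution DEM against a nearest-posting reconstruction
    of its decimated version and report basic altitude-error statistics."""
    coarse = decimate_dem(dem, factor)
    rows = np.minimum(np.arange(dem.shape[0]) // factor, coarse.shape[0] - 1)
    cols = np.minimum(np.arange(dem.shape[1]) // factor, coarse.shape[1] - 1)
    recon = coarse[np.ix_(rows, cols)]          # nearest-neighbour reconstruction
    err = dem - recon
    return {"mean": float(err.mean()),
            "std": float(err.std()),
            "max_abs": float(np.abs(err).max())}

# Synthetic example: a gently sloping terrain tile, decimated by a factor of 4.
dem = np.fromfunction(lambda i, j: 0.5 * i + 0.2 * j, (256, 256))
print(altitude_error_stats(dem, 4))
```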
In future aircraft cockpits, SVS will be used to display 3D physical and virtual information to pilots. Prototype and production Synthetic Vision Displays (SVD) from Euro Telematic, UPS Advanced Technologies, Universal Avionics, VDO-Luftfahrtgeratewerk, and NASA are reviewed. As data sources, terrain, obstacle, navigation, and airport data are needed. Jeppesen-Sanderson, Inc. and Darmstadt Univ. of Technology are currently developing certifiable methods for the acquisition, validation, and processing of terrain, obstacle, and airport databases. The acquired data will be integrated into a High-Quality Database (HQ-DB). This database is the master repository; it contains all information relevant for all types of aviation applications. From the HQ-DB, SVS-relevant data is retrieved, converted, decimated, and adapted into an SVS Real-Time Onboard Database (RTO-DB). The process of data acquisition, verification, and data processing will be defined in a way that allows certification within DO-200A and new RTCA/EUROCAE standards for airport and terrain data. The open formats proposed will be established and evaluated for industrial usability. Finally, a NASA-industry cooperation to develop industrial SVS products under the umbrella of the NASA Aviation Safety Program (ASP) is introduced. A key element of the SVS NASA-ASP is the Jeppesen-led task to develop methods for world-wide database generation and certification. Jeppesen will build three airport databases that will be used in flight trials with NASA aircraft.
Synthetic vision has the potential to significantly improve situation awareness for aircraft that do not possess windshields or windows. Windshields and windows add considerable weight and risk to vehicle design. NASA's X-38 crew-return vehicle has a windowless cockpit design. Synthetic vision tools have been developed to provide a simulated real-time 3-D perspective to X-38 crews. This virtual cockpit window provides an all-weather, day/night situation awareness display, enriched with a wide variety of flight-related information. The system has already been successfully demonstrated in several flight tests; this paper discusses the challenges faced in developing it and the results of initial flight tests. While many different types of digital topography, maps, and imagery are available, seamlessly integrating the data requires new approaches not available in standard geographic information systems or flight simulation software. Since much of the data is in cylindrical geographic coordinates, and the computer display API works in Cartesian coordinates, selection of an efficient and accurate coordinate system is crucial. We describe a new method of utilizing a multi-resolution digital topography database that provides high-resolution near-field performance (up to 1 meter) with a complete horizon model, yet retains excellent display speed. The LandForm FlightVision system employed for this purpose utilizes five different resolutions of digital topography in order to model a flight from space to earth landing. The real-time situational awareness provided by the virtual cockpit window has been enhanced by the display of a dynamic landing range model. This model incorporates vehicle flight characteristics and winds-aloft information.
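A concrete instance of the coordinate issue mentioned above: data given in geodetic (latitude/longitude/height) coordinates must be mapped into a Cartesian frame before a display API can render it. The sketch below uses the standard WGS-84 geodetic-to-ECEF conversion; the paper's actual coordinate choice is not specified, so this is purely illustrative.

```python
import math

# WGS-84 ellipsoid constants
WGS84_A = 6378137.0                    # semi-major axis [m]
WGS84_F = 1.0 / 298.257223563          # flattening
WGS84_E2 = WGS84_F * (2.0 - WGS84_F)   # first eccentricity squared

def geodetic_to_ecef(lat_deg: float, lon_deg: float, h: float):
    """Convert geodetic latitude/longitude/height to Earth-centred Cartesian
    (ECEF) coordinates, i.e. a fixed Cartesian frame a rendering API can use."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + h) * math.sin(lat)
    return x, y, z

print(geodetic_to_ecef(29.56, -95.09, 10.0))   # illustrative coordinates
```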
Future military transport aircraft will require a new approach to the avionics suite to fulfill an ever-changing variety of missions. The most demanding phases of these missions are typically the low-level flight segments, including tactical terrain following/avoidance, payload drop, and/or on-board autonomous landing at forward operating strips without ground-based infrastructure. As a consequence, individual components and systems must become more integrated to offer a higher degree of reliability, integrity, flexibility, and autonomy over existing systems while reducing crew workload. The integration of digital terrain data not only introduces synthetic vision into the cockpit, but also enhances navigation and guidance capabilities. At DaimlerChrysler Aerospace AG Military Aircraft Division (Dasa-M), an integrated navigation, flight guidance, and synthetic vision system based on digital terrain data has been developed to fulfill the requirements of the Future Transport Aircraft (FTA). The fusion of three independent navigation sensors provides a more reliable and precise solution to both the 4D flight guidance and the display components, which comprise a head-up and a head-down display with synthetic vision. This paper presents the system, its integration into DLR's VFW 614 Advanced Technology Testing Aircraft System (ATTAS), and the results of the flight-test campaign.
Rockwell Science Center is investigating novel human-computer interface techniques for enhancing situational awareness in future flight decks. One aspect is to provide intuitive displays that convey the vital information and the spatial awareness by augmenting the real world with an overlay of relevant information registered to the real world. Such Augmented Reality (AR) techniques can be employed during bad weather scenarios to permit flying under Visual Flight Rules (VFR) in conditions that would normally require Instrument Flight Rules (IFR). These systems could easily be implemented on head-up displays (HUD). The advantage of AR systems over purely synthetic vision (SV) systems is that the pilot can relate the information overlay to real objects in the world, whereas SV systems provide a constant virtual view in which inconsistencies can hardly be detected. The development of components for such a system led to a demonstrator implemented on a PC. A camera grabs video images, which are overlaid with registered information. Orientation of the camera is obtained from an inclinometer and a magnetometer; position is acquired from GPS. In a possible implementation in an airplane, the on-board attitude information can be used for obtaining correct registration. If visibility is sufficient, computer vision modules can be used to fine-tune the registration by matching visual cues with database features. Such technology would be especially useful for landing approaches. The current demonstrator provides a frame rate of 15 fps, using a live video feed as background and an overlay of avionics symbology in the foreground. In addition, terrain rendering from a 1 arc sec. digital elevation model database can be overlaid to provide synthetic vision in case of limited visibility. For true outdoor testing (on ground level), the system has been implemented on a wearable computer.
A comprehensive safety concept is proposed for aircraft experiencing an incident involving the development of fire and smoke in the cockpit. Fire or excessive heat caused by a malfunctioning electrical appliance may produce toxic smoke, may obscure the view of the instrument panel, and may cause health-critical respiration conditions. Immediate reaction of the crew, safe respiration conditions, and a clear, undisturbed view of critical flight information can be assumed to be the prerequisites for a safe emergency landing. The personal safety equipment of the aircraft has to be effective in supporting the crew to divert the aircraft to an alternate airport in the shortest possible amount of time. Many other elements in the cause-and-effect context of the emergence of fire, such as fire prevention, fire detection, the fire extinguishing concept, systematic redundancy, the wiring concept, the design of the power supply system, and concise emergency checklist procedures, are briefly reviewed, because only a comprehensive and complete approach will avoid fatal accidents of complex aircraft in the future.
We present recent work on methods for fusion of imagery from multiple sensors for night vision capability. The fusion system architectures are based on biological models of the spatial and opponent-color processes in the human retina and visual cortex. The real-time implementation of the dual-sensor fusion system combines imagery from either a low-light CCD camera (developed at MIT Lincoln Laboratory) or a short-wave infrared camera (from Sensors Unlimited, Inc.) with thermal long-wave infrared imagery (from a Lockheed Martin microbolometer camera). Example results are shown for an extension of the fusion architecture to include imagery from all three of these sensors as well as imagery from a mid-wave infrared imager (from Raytheon Amber Corp.). We also demonstrate how the results from these multi-sensor fusion systems can be used as inputs to an interactive tool for target designation, learning, and search based on a Fuzzy ARTMAP neural network.
We investigated the type of spatial structure present in nighttime imagery that is perceptually relevant for human observers to perform texture-based segmentation of real-world scenes. Three psychophysical tasks were developed to evaluate observer performance with the nighttime imagery. The test imagery consisted of scenes obtained via an image-intensified low-light CCD, a long-wave infrared sensor, and monochrome sensor fusion. For one task, performance was best with the fused imagery, but for two tasks, performance with fused imagery was not improved (compared to performance with IR imagery). Spatial filtering of the scenes and further testing revealed that the mid spatial frequencies (1-4 cpd) were more critical in determining performance than either the low or high frequencies. Fourier analysis of the scenes revealed a strong relationship between power and performance, where scenes with more power (especially at the middle frequencies) supported better performance. Implications of this research are that performance on these low-level visual tasks depends on power at the middle frequencies and that fusion algorithms may be improved if this is taken into consideration.
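The band-power measure used in this kind of Fourier analysis can be sketched as follows; the function name, the pixel-to-degree scaling, and the random test image are illustrative assumptions.

```python
import numpy as np

def band_power(image: np.ndarray, low_cpd: float, high_cpd: float,
               deg_per_pixel: float) -> float:
    """Fraction of Fourier power in a spatial-frequency band (cycles/degree).

    `deg_per_pixel` is the angular size of one pixel at the viewing distance,
    so frequencies in cycles/pixel convert to cycles/degree by dividing by it.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    ny, nx = image.shape
    fy = np.fft.fftshift(np.fft.fftfreq(ny))          # cycles per pixel
    fx = np.fft.fftshift(np.fft.fftfreq(nx))
    radial = np.sqrt(fx[np.newaxis, :] ** 2 + fy[:, np.newaxis] ** 2) / deg_per_pixel
    mask = (radial >= low_cpd) & (radial <= high_cpd)
    return float(spectrum[mask].sum() / spectrum.sum())

# Example: power of a random test image in the 1-4 cpd band, assuming each
# pixel subtends 0.02 degrees of visual angle.
img = np.random.rand(256, 256)
print(band_power(img, 1.0, 4.0, deg_per_pixel=0.02))
```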
Algorithms for image fusion were evaluated as part of the development of an airborne Enhanced/Synthetic Vision System (ESVS) for helicopter Search and Rescue operations. The ESVS will be displayed on a high-resolution, wide field-of-view helmet-mounted display (HMD). The full HMD field-of-view (FOV) will consist of a synthetic image to support navigation and situational awareness, and an infrared image inset will be fused into the center of the FOV to provide real-world feedback and support flight operations at low altitudes. Three fusion algorithms were selected for evaluation against the ESVS requirements. In particular, algorithms were modified and tested against the unique problem of presenting a useful fusion of imagery of varying quality. A pixel-averaging algorithm was selected as the simplest way to fuse two different sources of imagery. Two other algorithms, originally developed for real-time fusion of low-light visible images with infrared images (one at the TNO Human Factors Institute and the other at the MIT Lincoln Laboratory), were adapted and implemented. To evaluate the algorithms' performance, artificially generated infrared images were fused with synthetic images and viewed in a sequence corresponding to a search and rescue scenario for a descent to hover. Application of all three fusion algorithms improved the raw infrared image, but the MIT-based algorithm generated some undesirable effects such as contrast reversals. This algorithm was also computationally intensive and relatively difficult to tune. The pixel-averaging algorithm was the simplest in terms of per-pixel operations and provided good results. The TNO-based algorithm was superior in that, while it was slightly more complex than pixel averaging, it demonstrated similar results, was more flexible, and had the advantage of predictably preserving certain synthetic features which could be used to support obstacle detection.
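Of the three algorithms, pixel averaging is simple enough to sketch directly. The sketch below assumes co-registered, normalized single-channel inputs; the function name and the alpha weighting are illustrative.

```python
import numpy as np

def fuse_average(ir_img: np.ndarray, synthetic_img: np.ndarray,
                 alpha: float = 0.5) -> np.ndarray:
    """Pixel-averaging fusion of a sensor image with a synthetic image.

    Both inputs are assumed to be co-registered, single-channel arrays scaled
    to [0, 1]; alpha weights the infrared contribution.  This illustrates the
    simplest of the three fusion schemes compared in the paper.
    """
    if ir_img.shape != synthetic_img.shape:
        raise ValueError("images must be co-registered and of equal size")
    return np.clip(alpha * ir_img + (1.0 - alpha) * synthetic_img, 0.0, 1.0)
```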
The purpose of this experiment was to quantitatively measure driver performance in detecting potential road hazards in visual and infrared (IR) imagery of road scenes containing varying combinations of contrast and noise. This pilot test is a first step toward comparing various IR and visual sensors and displays for an enhanced vision system inside the driver compartment. The visible and IR road imagery obtained was displayed on a large screen and on a PC monitor, and subject response times were recorded. Based on the response times and the known time of occurrence of each driving hazard, detection probabilities were computed. The goal was to determine which combinations of sensor, contrast, and noise enable subjects to achieve a higher detection probability for potential driving hazards.
A series of tests was conducted to assess the feasibility and performance of a fixed-field infrared landing monitor system located on the runway. The resultant images are used to provide enhanced vision for ground personnel, in contrast to the more traditional enhanced vision for the flight crew. This paper describes the architecture and design of a dithered 320 x 240 MWIR InSb infrared camera, along with qualitative performance and test results. Issues associated with SWIR/MWIR bandpass selection, FPA type, and atmospheric penetration are discussed, as well as resolution requirements. Images of aircraft landing on an aircraft carrier are included for illustrative purposes.
Results from field experiments and accident data analyses suggest that the majority of the problems experienced by military drivers using I2 devices, such as night vision goggles (NVGs), can be attributed to a limited understanding of their capabilities and limitations and to perceptual problems. In addition, there is evidence that skills for using NVDs for driving are highly perishable and require frequent practice for sustainment. At present there is little formal training available to help drivers obtain the required knowledge and skills, and little opportunity to acquire and practice perceptual skills with representative imagery and scenarios prior to driving in the operational environment. The Night Driving Training Aid (NDTA) was developed for the U.S. Army to address this training deficiency. We previously reported interim results of our work to identify and validate training requirements, to develop instructional materials and customized instructional software, and to deliver the instruction in a multimedia, interactive PC environment. In this paper we focus on describing and illustrating the features and capabilities of the final prototype NDTA. In addition, we discuss the technical and training issues addressed and lessons learned in developing a low-cost, effective PC-based night driving training aid.
The well-known road detection and tracking algorithm (RDT), developed at the Universität der Bundeswehr München (UBM), has been adapted for following unpaved paths and contour lines. The vision system consists of a color CCD camera mounted on UBM's high-bandwidth pan-tilt head, dubbed TACC. The monochrome and color signals from the camera are processed in parallel. This vision system is used for lateral control of the Primus-C experimental vehicle Digitized Wiesel 2, an air-transportable tracked tank. Image processing simultaneously exploits the results from edge-based feature extraction and area-based segmentation. Feature matching is facilitated by exploiting the photometric homogeneity along contours. A flexible algorithm for active viewing direction control was implemented. The two-axis camera carrier (TACC) is controlled by fixating on a point moving along the centerline of the road segment. A semi-autonomous initialization mode is integrated, using image points on the contour to be followed, specified by an operator. Autonomous driving at speeds up to 50 km/h on unpaved roads was demonstrated publicly in June 1999.
This paper presents an image-processing algorithm for estimating both the egomotion of an outdoor robotic platform and the structure of the surrounding terrain. The algorithm is based on correlation and is embedded in an iterative, multi-resolution framework. As such, it is suited to outdoor ground-based and underwater scenes. Both single-camera rigs and multiple-camera rigs can be accommodated. The use of multiple synchronized cameras results in more rapid convergence of the iterative approach. We describe how the algorithm operates and give examples of its application in several robotic domains: autonomous mobility of outdoor robots and underwater robots.
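The correlation-within-a-pyramid idea (though not the full egomotion-and-structure estimator of the paper) can be sketched as a coarse-to-fine shift search; all names and the integer-pixel search window are illustrative assumptions.

```python
import numpy as np

def best_shift(ref: np.ndarray, cur: np.ndarray, search: int) -> tuple:
    """Integer-pixel shift of `cur` relative to `ref` that maximises
    normalised cross-correlation over a +/- `search` pixel window."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(cur, dy, axis=0), dx, axis=1)
            a = ref - ref.mean()
            b = shifted - shifted.mean()
            score = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

def pyramid_shift(ref: np.ndarray, cur: np.ndarray,
                  levels: int = 3, search: int = 2) -> tuple:
    """Coarse-to-fine refinement: estimate at the coarsest level, then refine
    the accumulated shift at each finer level."""
    dy, dx = 0, 0
    for level in reversed(range(levels)):
        step = 2 ** level
        r, c = ref[::step, ::step], cur[::step, ::step]
        c = np.roll(np.roll(c, dy // step, axis=0), dx // step, axis=1)
        ddy, ddx = best_shift(r, c, search)
        dy, dx = dy + ddy * step, dx + ddx * step
    return dy, dx
```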
A stereo vision based obstacle detection system is presented. The matching process on the input stereogram is performed as an optimisation of an energy functional through a variational approach, yielding dense disparity maps. The energy minimisation is implemented by a Cellular Neural Network. The state of the art of the hardware implementation of the system is presented. Some experiments on the use of the system in outdoor applications are shown. These tests demonstrate the feasibility of an obstacle detection system for an autonomous surveillance robotic platform. The real-time characteristics of the hardwired version of the algorithm will allow the temporal and spatial integration of data, with a considerable reduction in otherwise unavoidable data noise.
The scene interpretation and behavior planning of a vehicle in real-world traffic is a difficult problem to solve. If different hierarchies of tasks and purposes are built to structure the behavior of a driver, complex systems can be designed. But ultimately, behavior planning in vehicles can only influence the controlled variables: steering angle and velocity. In this paper a scene interpretation and a behavior planning for a driver assistance system aimed at cruise control is proposed. In this system the controlled variables are determined by evaluating the dynamics of a two-dimensional neural field for scene interpretation and two one-dimensional neural fields controlling steering angle and velocity. The stimuli of the fields are determined according to the sensor information.
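The abstract does not give the field equations, so the following is only a generic sketch of an Amari-type one-dimensional neural field with local excitation and global inhibition, the kind of dynamics that could select a single steering-angle peak from a sensor-derived stimulus; every parameter and name here is an assumption.

```python
import numpy as np

def neural_field_step(u: np.ndarray, stimulus: np.ndarray, dt: float = 0.05,
                      tau: float = 1.0, h: float = -2.0) -> np.ndarray:
    """One Euler step of a 1D Amari-type neural field.

    u         activation over the field (e.g. candidate steering angles)
    stimulus  external input derived from the sensors
    The lateral kernel combines local excitation with global inhibition, so a
    single activation peak (the selected value) emerges from the dynamics.
    """
    n = u.size
    x = np.arange(n)
    dist = np.minimum(np.abs(x[:, None] - x[None, :]),
                      n - np.abs(x[:, None] - x[None, :]))      # circular distance
    kernel = 2.0 * np.exp(-0.5 * (dist / 3.0) ** 2) - 0.5        # excitation - inhibition
    f = 1.0 / (1.0 + np.exp(-u))                                 # sigmoid output rate
    du = (-u + kernel @ f / n + stimulus + h) / tau
    return u + dt * du

# Relax the field on a two-bump stimulus; the stronger bump wins the selection.
u = np.zeros(90)
s = np.zeros(90); s[30] = 6.0; s[60] = 4.0
for _ in range(400):
    u = neural_field_step(u, s)
print(int(np.argmax(u)))   # ~30
```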
The teleoperation of equipment under impoverished sensing and communication delays cannot be handled efficiently by conventional remote control techniques. Our approach to this problem is based on an augmented reality control mode in which a graphical model of the equipment is overlaid upon real views from the work site. A basic capability required in order to produce such an augmented reality mode is the ability to synthesize visual information from new viewpoints based upon existing ones, so as to compensate for the sparsity of real data. Our approach to the problem of image-based view synthesis is based upon the implicit construction of a 3D approximation of the scene, composed of planar triangular patches. New views are then generated by texture-mapping the available real image data onto the reprojected triangles. In order to generate a physically valid joint triangulation which minimizes the distortions in the rendering of the new view, an iterative approach is utilized. This approach begins with an initial triangulation and refines it iteratively through node-linking alterations and a split-and-merge process, based upon correlation values between corresponding triangular patches. The paper presents results for both synthetic and real scenes.
In this contribution we present how techniques from computer graphics and computer vision can be combined to navigate a robot in a natural environment based on visual information. The key idea is to reconstruct an image-based scene model, which is used in the navigation task to judge position hypotheses by comparing the captured camera image with a virtual image created from the image-based scene model. Computer graphics contributes a method for photo-realistic rendering in real time; computer vision methods are applied to fully automatically reconstruct the scene model from image sequences taken by a hand-held camera or a moving platform. During navigation, a probabilistic state estimation algorithm is applied to handle uncertainty in the image acquisition process and the dynamic model of the moving platform. We present experiments which prove that our proposed approach, i.e. using an image-based scene model for navigation, is capable of globally localizing a moving platform with reasonable effort. Using off-the-shelf computer graphics hardware, even real-time navigation is possible.
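The judge-hypotheses-by-rendering step can be sketched as a particle-filter style weighting. Here `render(pose)` stands in for the image-based rendering of the scene model and is a placeholder, as are the Gaussian likelihood model and all names.

```python
import numpy as np

def image_likelihood(camera_img: np.ndarray, virtual_img: np.ndarray,
                     sigma: float = 0.1) -> float:
    """Score a pose hypothesis by comparing the camera image with the image
    rendered from the scene model at that hypothesised pose (assumed Gaussian
    error model on the mean squared intensity difference)."""
    err = np.mean((camera_img - virtual_img) ** 2)
    return float(np.exp(-err / (2.0 * sigma ** 2)))

def weight_pose_hypotheses(camera_img, hypotheses, render):
    """Probabilistic weighting of pose hypotheses (a particle-filter style
    correction step).  `render(pose)` is a placeholder for producing a
    virtual image from the image-based scene model at the given pose."""
    weights = np.array([image_likelihood(camera_img, render(p)) for p in hypotheses])
    total = weights.sum()
    return weights / total if total > 0 else np.full(len(hypotheses), 1.0 / len(hypotheses))
```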
Scene interpretation is a crucial problem for navigation and guidance systems. The necessary integration of a large variety of heterogeneous knowledge leads us to design an architecture that distributes knowledge and performs parallel and concurrent processing. We chose a multi-agent approach in which the implementation of the specialized agents is based on incrementality, distribution, cooperation, attention mechanisms, and adaptability.
Robotic teleoperation is a major research area with numerous applications. Efficient teleoperation, however, greatly depends on the provided sensory information. In this paper, an integrated radar-photometry sensor is presented. The developed sensor relies on the strengths of the two main modalities: robust radar-based range data, and high-resolution dynamic photometric imaging. While radar data has low resolution, and depth from motion in photometric images is susceptible to poor visibility conditions, the integrated sensor compensates for the flaws of the individual components. The integration of the two modalities is achieved by using the radar-based range data to constrain the optical flow estimation, and by fusing the resulting depth maps. The optical flow computation is constrained by a model flow field based upon the radar data, by using a rigidity constraint, and by incorporating edge information into the optical flow estimation. The data fusion is based upon a confidence estimation of the image-based depth computation. Results with simulated data demonstrate the good potential of the approach.
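The confidence-based fusion step can be sketched as a per-pixel weighted blend; the assumption that both depth maps are co-registered on the same grid, and all names, are illustrative.

```python
import numpy as np

def fuse_depth(radar_depth: np.ndarray, flow_depth: np.ndarray,
               flow_confidence: np.ndarray) -> np.ndarray:
    """Confidence-weighted fusion of a co-registered radar depth map with a
    depth-from-motion map.  Where the optical-flow confidence is high the
    photometric estimate dominates; where it is low (poor visibility, weak
    texture) the fused result falls back to the radar range data.
    """
    c = np.clip(flow_confidence, 0.0, 1.0)
    return c * flow_depth + (1.0 - c) * radar_depth
```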
The technique of image mosaicing is analyzed in this paper. The process of image mosaicing comprises two parts, namely image registration and image blending. The key issue of image mosaicing is to calculate the transformations (homographies) from the input image pairs correctly and accurately. Feature-based methods are adopted for image registration. First, corners are extracted from the images to be glued together. Matches between these features are found manually. During the process of mosaicing, three different algorithms have been investigated, including (1) the four-point algorithm, (2) the least squares algorithm, and (3) the unbiased least squares algorithm, which applies Kanatani's statistical optimization theory. Experimental results show the effectiveness of the unbiased least squares algorithm.
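The least-squares variant can be sketched with the standard direct linear transform (DLT): each correspondence contributes two linear equations in the nine entries of H, and the SVD null vector gives the solution (exact for four points, least squares for more). This sketch does not cover Kanatani's unbiased estimator; the names and example homography are illustrative.

```python
import numpy as np

def homography_dlt(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Estimate the 3x3 homography H with dst ~ H * src from n >= 4 point
    correspondences via the direct linear transform (linear least squares
    solved by SVD).  src and dst are (n, 2) arrays of pixel coordinates."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1]                       # null vector = smallest singular vector
    return (h / h[-1]).reshape(3, 3)

# Example: recover a known homography from four corner correspondences.
H_true = np.array([[1.2, 0.1, 5.0], [0.0, 0.9, -3.0], [1e-4, 0.0, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=float)
pts = np.hstack([src, np.ones((4, 1))]) @ H_true.T
dst = pts[:, :2] / pts[:, 2:]
print(np.round(homography_dlt(src, dst), 4))
```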
Regarding the input image as a function defined on the sensor frame, the structural identification of a known object is considered as the extraction of the domains of the sensor frame responsible for all object details. The low-level processing tools are formed by a set of detectors. A detector is introduced as a generalization of any feature detector designed to recognize a certain class of patterns. As an operation of intermediate-level processing, detector filling is introduced: it cuts out a domain of the sensor frame in response to the input parameters of the detector. Then, a new object representation (TSG-model) is introduced. It consists of three components: one responsible for the object structure, one for the construction of a detail's domain from the domains of its sub-details, and one reducing the identification of a detail if other details are already identified. The image processing for structural identification of an object is thus reduced to a typical AI search, which executes operations of detector filling while interacting with the TSG-model of the object. The paper outlines a new technology that can be applied in many areas. The presented approach is flexible with respect to the kind of input sensor, e.g. a usual gray-scale camera, a stereo image system, a range image system, etc.
The demand for supplementing existing airborne radar systems with enhanced forward-looking capabilities has considerably increased. Available radar systems are not able to meet the requirements for enhanced vision. Instead, a new approach has to be taken to cover the forward-lying sector with respect to the flight path. Presently a system called SIREV (Sector Imaging Radar for Enhanced Vision) is under development at DLR. Due to the all-weather capability of the system and its ability to present radar images very similar to optical images, either as top view (mapping mode) or as pilot view (central perspective mode), the system is particularly well suited for navigation support, autonomous landing approaches, or taxi support on the ground. In this paper the authors describe the idea from which the new SIREV system originates and the relation of the SIREV principle to the SAR principle. Different modes of operation and the performance figures obtainable with them will be discussed with regard to the special advantages of each sensor. Some potential applications of either sensor will be explained in detail. Finally, a summarized overview of the system under development at DLR, together with a description of a test field setup at Oberpfaffenhofen airfield, will be given. The SIREV project at DLR was partially funded by STN Atlas Elektronik, Bremen. This company also holds the SIREV license rights.
This paper presents a new formulation of the Extended Chirp Scaling algorithm (ECS), suitable for the processing of data from the forward-looking SAR system SIREV (Sector Imaging Radar for Enhanced Vision). This system is presently under development at the German Aerospace Center (DLR). It is shown that the SIREV data acquisition has several similarities with the ScanSAR mode of operation. The differences between the SIREV and ScanSAR modes are also analyzed. According to these differences, the ECS for ScanSAR has been modified. The modified equations of the ECS are presented, and several simulation results demonstrate the good performance of the ECS for SIREV processing. The SIREV project at DLR was partially funded by STN Atlas Elektronik, Bremen. This company also holds the SIREV license rights.
The Helmet Mounted Display (HMD) system has several advantages in comparison with other displays used in the aeroplane. For example, no matter in which direction the pilot looks, he can always see relevant flight data. Another point is the support of the pilot by projecting sensor pictures onto the visor, especially increasing the pilot's view during night flights with an overlaid FLIR picture on the visor. Weapon delivery is faster and easier when assisted by the displayed symbology and the tracking system. It is possible to achieve missile lock-on without the need for flight maneuvers. In more than 20 night flights on a Tornado trainer, seven different test pilots tested a binocular HMD prototype. The helmet was evaluated with regard to ergonomic aspects and the readability and visibility of the stroke symbology and an overlaid sensor image. There are significant weaknesses, for example in ergonomic aspects. The final result of the tests is that the system is not yet ready for series production. There are some important points that must be reworked, especially with a view to application in fast jets.
The Darmstadt University of Technology (TUD) develops displays with 3D terrain depiction to improve pilots' spatial situation awareness. This will enhance safety and will lead to economic benefits by supporting new traffic procedures (e.g. SMGCS taxiing). Flight tests and simulations with a first generation of displays gave proof of the concept. However, for global usability it was necessary to rebuild the software concept. Displays of the second generation utilise a worldwide database instead of a local, experimental one. With this new software concept, both the database and the tools to build it should be certifiable. For performance reasons, logic information is included in the database. This means that the animation information about special objects is described in the database and not in the display software routines. This was the motivation to implement a new version of the software instead of an update of the first generation. A graphics library written to build 3D graphics applications is used. This library makes optimal use of the graphics hardware and supports databases. The new version provides a lot of flexibility without decreasing the performance of the display software. It is built for experimental research and not as a final product. The display format must offer a lot of flexibility, because the software contributes to different research projects in different configurations. Great flexibility is also necessary for the interface configuration. Important steps in the development process of TUD's second-generation Synthetic Vision Displays will be presented. Furthermore, features of the drawing process and the communication interface of the software will be explained.