A low-flying autonomous rotorcraft traveling in an unknown domain must use passive sensors to detect obstacles and form a three-dimensional model of its environment. As the rotorcraft travels toward a predefined destination, it acquires images of stationary objects in its field of view. Several texture-classifying operators are applied to the original intensity images to obtain "texture" images. The application of each operator to the sequence of images forms an alternate sequence of images in which pixel values encode a measure of texture. In our approach to reconstructing the environment, we divide the three-dimensional space of interest (i.e., the environment) into small cubic volumetric elements (voxels). It is assumed that the position and orientation of the camera with respect to the environment are known. Thus, for every pixel in each image in a sequence, we can compute a ray originating at the camera center and extending through and beyond the pixel. The value observed at the pixel is assigned as an observation for all the voxels through which the ray passes. Then, using the mean and variance of the observations for each voxel, one can determine whether the voxel is full or empty. Each sequence of images is used to form a three-dimensional model of the environment. The reconstruction obtained using the sequence of intensity images is not necessarily the same as that obtained using texture images. While intensity images may do well in one area of the scene, texture images may do well in others. Thus, by fusing the different environment models together, a more robust model of the environment is formed. We discuss various methods of fusing the environment models obtained using intensity as well as texture measures and discuss the advantages and disadvantages of each as related to our application. Finally, we present experimental results obtained using real image sequences.
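A minimal sketch of the voxel-labeling step described above, assuming known camera poses: pixel values are accumulated along each ray, and per-voxel statistics decide occupancy. The grid dimensions, ray-stepping scheme, and thresholds are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: accumulate per-voxel observations along pixel rays,
# then classify voxels from the resulting statistics.  Grid size, voxel size,
# thresholds, and the ray-stepping scheme are assumptions for illustration.
import numpy as np

GRID = (64, 64, 64)          # voxel grid covering the region of interest
VOXEL_SIZE = 0.5             # metres per voxel (assumed); grid origin at world origin
obs_sum   = np.zeros(GRID)
obs_sqsum = np.zeros(GRID)
obs_count = np.zeros(GRID)

def accumulate(camera_center, pixel_dir, value, max_range=30.0, step=0.25):
    """Walk along the pixel ray and record `value` for every voxel it crosses."""
    for t in np.arange(0.0, max_range, step):
        p = camera_center + t * pixel_dir                 # point in the world frame
        idx = tuple((p / VOXEL_SIZE).astype(int))
        if all(0 <= i < n for i, n in zip(idx, GRID)):
            obs_sum[idx]   += value
            obs_sqsum[idx] += value * value
            obs_count[idx] += 1

def classify(var_threshold=50.0, min_obs=5):
    """Consistent observations across views (low variance) suggest a full voxel."""
    mean = np.divide(obs_sum, obs_count, out=np.zeros(GRID), where=obs_count > 0)
    var  = np.divide(obs_sqsum, obs_count, out=np.zeros(GRID), where=obs_count > 0) - mean**2
    return (obs_count >= min_obs) & (var < var_threshold)
```

Each image sequence (intensity or one of the texture measures) would fill its own grid in this way, and the resulting occupancy maps are the models that are later fused.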
In previous work we introduced a description of image motion called the local translational decomposition (LTD). This representation associates with image features or small image areas a 3D unit vector that describes the direction of motion of the corresponding environmental feature or small surface area. The LTD representation was shown to considerably simplify the inference of motion parameters for ego-motion. This paper exploits the rigidity constraint in order to determine the relative depth between LTD vectors. Using the rigidity constraint, along with a local surface planarity assumption, an improved LTD estimation algorithm is constructed. The effectiveness of this algorithm is demonstrated by applying it to the problem of reconstructing 3D environmental surfaces.
An automatic point correspondence algorithm based on ego-motion compensation is presented. A basic problem in autonomous navigation and motion estimation is automatically detecting and tracking features over consecutive frames, a challenging problem when the camera motion is significant. In general, feature displacement over consecutive frames can be approximately decomposed into two components: (i) the displacement due to camera motion, which can be compensated by image rotation, scaling, and translation; and (ii) the displacement due to object motion and/or perspective projection. In this paper, we introduce a two-step approach: first, the motion of the camera is estimated using a computational-vision-based image registration algorithm; then consecutive frames are transformed to the same coordinate system and the feature correspondence problem is solved as one of tracking moving objects using a still camera. Methods for subpixel-accuracy feature matching and tracking are introduced. The approach results in a robust and efficient algorithm. Results on several real image sequences are presented.
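A hedged sketch of the two-step idea using OpenCV: a global parametric transform stands in for the registration step, the previous frame is warped into the current frame's coordinates, and features are then matched with parabolic subpixel refinement. The specific calls, window sizes, and the affine motion model are our own choices, not the authors' implementation.

```python
# Sketch of ego-motion compensation followed by feature tracking.
# Requires OpenCV >= 4.1 for the findTransformECC signature used below.
import cv2
import numpy as np

def compensate_ego_motion(prev_gray, curr_gray):
    """Estimate an affine warp (rotation/scale/translation) and apply it to prev."""
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-5)
    cv2.findTransformECC(curr_gray, prev_gray, warp,
                         cv2.MOTION_AFFINE, criteria, None, 5)
    h, w = curr_gray.shape
    return cv2.warpAffine(prev_gray, warp, (w, h),
                          flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)

def track_feature(stabilized_prev, curr_gray, pt, tmpl=15, search=31):
    """Template match around a feature (assumed away from the border),
    with parabolic subpixel refinement of the correlation peak."""
    x, y = int(pt[0]), int(pt[1])
    template = stabilized_prev[y - tmpl:y + tmpl + 1, x - tmpl:x + tmpl + 1]
    region   = curr_gray[y - search:y + search + 1, x - search:x + search + 1]
    score = cv2.matchTemplate(region, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (mx, my) = cv2.minMaxLoc(score)
    def subpix(s, i, j, horizontal):
        a, b, c = (s[i, j - 1], s[i, j], s[i, j + 1]) if horizontal else \
                  (s[i - 1, j], s[i, j], s[i + 1, j])
        return 0.5 * (a - c) / (a - 2 * b + c + 1e-12)
    dx = subpix(score, my, mx, True)  if 0 < mx < score.shape[1] - 1 else 0.0
    dy = subpix(score, my, mx, False) if 0 < my < score.shape[0] - 1 else 0.0
    # convert the peak location back to image coordinates in the current frame
    return (x - search + tmpl + mx + dx, y - search + tmpl + my + dy)
```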
A broad variety of passive ranging algorithms are currently being developed and enhanced at NASA Ames and elsewhere. Some of the factors resulting in algorithm variability include (a) the number of sensors (e.g., stereo), (b) the type of input sensors (e.g., multispectral), and (c) output display needs (e.g., history vectors superimposed on raw video). This paper describes a cost-effective, real-time, general-purpose (reprogrammable), computationally scalable digital processing architecture that enables researchers to perform comparative evaluations in a laboratory or field environment. Performance benchmark studies indicate that the passive ranging algorithm developed at NASA Ames can be executed by a 32-processor shared-memory multiprocessor architecture implemented on two (9U) VME boards.
A significant need exists for automatic obstacle detection systems on board rotorcraft due to the heavy workload demands imposed upon the pilot and crew. Such systems must augment the pilot's ability to detect and avoid obstacles for the sake of improving flight safety. The most important requirements of obstacle detection systems include a large field of view, a high update/frame rate, and high spatial resolution. In military systems the requirement of covertness is also present. To satisfy the requirement of covertness, Honeywell, in conjunction with NASA Ames, has developed and demonstrated through simulation the feasibility of maximally passive systems for obstacle detection and avoidance. Such systems rely on passive ranging techniques such as motion analysis and binocular stereo to perform their function through the use of passive sensor imagery. Honeywell's current efforts in passive-ranging-based obstacle detection systems involve the real-time implementation of the motion analysis component of such systems. The real-time implementation within a Honeywell flexible testbed environment is the subject of this paper. An overview of the motion analysis algorithm is provided and the issues involved in its real-time implementation are addressed.
The computer vision literature describes many methods to perform obstacle detection and avoidance for autonomous or semi-autonomous vehicles. Methods may be broadly categorized into field-based techniques and feature-based techniques. Field-based techniques have the advantage of a regular computational structure at every pixel throughout the image plane. Feature-based techniques are much more data-driven in that computational complexity increases dramatically in regions of the image populated by features. It is widely believed that a parallel architecture is necessary to run computer vision algorithms in real time. Field-based techniques lend themselves to easy parallelization due to their regular computational needs. However, we have found that field-based methods are sensitive to noise and have traditionally been difficult to generalize to arbitrary vehicle motion. Therefore, we have sought techniques to parallelize feature-based methods. This paper describes the computational needs of a parallel feature-based range-estimation method developed by NASA Ames. Issues of processing-element performance, load balancing, and data-flow bandwidth are addressed, along with a performance review of two architectures on which the feature-based method has been implemented.
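A generic illustration of the load-balancing concern raised above: because features cluster in textured regions, work is distributed per feature rather than per fixed pixel block. The pool-based partitioning and the placeholder kernel below are our own assumptions, not the NASA Ames implementation.

```python
# Feature-level load balancing: distribute detected features (not fixed image
# blocks) across workers so feature-dense regions do not overload one
# processing element.  `estimate_range_for_feature` is a hypothetical kernel.
from multiprocessing import Pool

def estimate_range_for_feature(feature):
    # placeholder for the per-feature range-estimation kernel (hypothetical)
    return feature

def parallel_range_estimation(features, workers=8, chunk=16):
    with Pool(processes=workers) as pool:
        # small chunks keep the dynamic schedule balanced when features cluster
        return pool.map(estimate_range_for_feature, features, chunksize=chunk)
```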
Mission designs for Mars, as well as return missions to the Moon, usually include initial unmanned exploration to obtain the geological and environmental data necessary for future manned programs. An important component of these unmanned exploration missions is the autonomous or remotely operated planetary rover -- a wheeled and/or legged vehicle containing several data collection and processing units. Later manned missions will also include such vehicles to assist humans with exploration and in supporting life science functions. The planetary rover must satisfy several mission requirements, including surveying the terrain, preparing landing sites, loading and unloading components for base operations, and aiding in the recovery of in-situ materials. As such, the design of such rovers requires the synergy of several vehicle functions which must operate in a coordinated fashion. This paper presents some of the design issues and vehicle requirements, particularly those requiring sensor information, that need to be addressed for this important planetary surface system. Further, based on some of the designs presently being investigated, the issues of system sensor fusion and standardization of interfaces are discussed.
Adaptive and intelligent multisensor perception is a characteristic that will be needed for space robotics and automation systems in order to improve productivity and flexibility. However, one of the major technical difficulties is related to illumination conditions in space. Space robotic systems, whether autonomous or not, will have to evolve and operate in a wide variety of illumination conditions within their mission: night, deep shadows, high illumination, or specularities. These robotic systems will also have to perceive and recognize the reflectance and emittance properties of a wide variety of rough surfaces. The purpose of our current research is to study a multisensor perception system that will be able: (1) to adapt the sensing strategy to lighting conditions, and (2) to allow for the geometrical and physical analysis of the surface properties on the scene.
A new sensor planning paradigm composed of parametric and structural sensor planning is presented. A hierarchically distributed perception net (HDPN) is introduced to represent a sensing architecture. The parametric planning achieves the desired accuracy of HDPN outputs by iteratively modifying the sensing parameters, whereas the structural planning configures an optimal HDPN by self-organizing redundant sensing. This paper provides a general, yet formal and efficient, method of representing and solving a sensor planning problem for an integrated sensor system.
Multi-sensor data fusion technology is rapidly maturing. In recent years, numerous algorithms and techniques have been introduced or applied, ranging from classical estimation and statistical methods (e.g., Bayesian inference) and pattern recognition methods to heuristic techniques such as expert systems and templating. Because of the range and variety of techniques, Hall and Linn (1990) developed a taxonomy which maps data fusion functions to classes of algorithms and to specific techniques. This paper describes a survey of commercial software tools applicable to multi-sensor data fusion. The paper identifies specific software programs, summarizes the required computing resources (hardware and software environment), and identifies the source of the software. The survey maps the computer software to the data fusion taxonomy, establishing the relationships between data fusion levels, functions, and algorithms. The intent of this paper is to allow the data fusion community to readily access these software building blocks without reinvention. Thus, new data fusion systems may be built with significant use of commercial off-the-shelf software.
We present a new method, based on a Bayesian approach and a Markov random field (MRF) model, for integrating several low-level visual modules. Using this approach, we show how results from an edge-based stereo module can be integrated with an intensity-based stereo algorithm. In another example, results from a shape-from-shading module are combined with intensity-based stereo. We first derive the intensity-based stereo algorithm using the MRF model. The integration is then performed by coupling the results from other modules to the energy functional of the MRF associated with the intensity-based stereo. The maximum a posteriori (MAP) estimate of the resulting MRF is obtained using the mean field annealing algorithm. Results from real and artificial images show a consistent improvement in accuracy after integration.
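A toy, hedged sketch of the coupling idea on a one-dimensional problem: an intensity-based disparity estimate is pulled toward its data term while a smoothness term, weighted down where an edge-based module reports discontinuities, regularizes it. Plain gradient descent is substituted for the paper's mean field annealing, and all weights are illustrative.

```python
# Coupling two modules through one energy: data fidelity + edge-modulated
# smoothness.  Gradient descent stands in for mean field annealing.
import numpy as np

def fuse_disparity(d_data, edge_prob, lam=2.0, steps=500, lr=0.05):
    d = d_data.astype(float).copy()
    for _ in range(steps):
        data_grad = 2.0 * (d - d_data)                     # data fidelity term
        w = lam * (1.0 - edge_prob[:-1])                   # suppress smoothing at edges
        diff = d[:-1] - d[1:]
        smooth_grad = np.zeros_like(d)
        smooth_grad[:-1] += 2.0 * w * diff
        smooth_grad[1:]  -= 2.0 * w * diff
        d -= lr * (data_grad + smooth_grad)
    return d

# usage: a step in the true disparity, a noisy data term, and an edge cue
truth = np.concatenate([np.zeros(50), np.full(50, 5.0)])
noisy = truth + np.random.normal(0, 0.8, 100)
edges = np.zeros(100)
edges[49] = 1.0            # the edge module flags the discontinuity
fused = fuse_disparity(noisy, edges)
```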
The main goal of our research in sensory data fusion (SDF) is the development of a systematic approach (a methodology) to designing systems for interpreting sensory information and for reasoning about the situation based upon this information and upon available databases and knowledge bases. To achieve this goal, two kinds of subgoals have been set: (1) develop a theoretical framework in which rational design/implementation decisions can be made, and (2) design a prototype SDF system along the lines of the framework. Our initial design of the framework has been described in our previous papers. In this paper we concentrate on the model-theoretic aspects of this framework. We postulate that data are embedded in data models, and information processing mechanisms are embedded in model operators. The paper is devoted to analyzing the classes of model operators and their significance in SDF. We investigate transformation, abstraction, and fusion operators. A prototype SDF system, fusing data from range and intensity sensors, is presented, exemplifying the structures introduced. Our framework is justified by the fact that it provides modularity, traceability of information flow, and a basis for a specification language for SDF.
This paper concerns part of a general study on multisensor radar/IR tracking in which the problem of track initiation in a noisy environment is emphasized. A first approach to this problem, namely the treatment of uncertain probabilistic models via a multiple hypothesis filter (MHF), was presented in a previous publication; for this purpose the theory of evidence was used. In this article, a general view of the evaluation environment is described, in order to correctly evaluate the performance of the extended MHF and to compare it with the standard MHF and with another tracking method. In the second part, a first multisensor approach is introduced and the associated problems are identified.
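Since the extension rests on the theory of evidence, the short function below illustrates Dempster's rule of combination for two sources over a common frame of discernment. The radar/IR mass assignments are invented purely for illustration and are not taken from the paper.

```python
# Dempster's rule of combination for two basic probability assignments (BPAs).
# Focal elements are frozensets of hypotheses; example masses are invented.
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# example: radar and IR evidence about initiating a track on a detection
radar = {frozenset({"target"}): 0.6, frozenset({"target", "clutter"}): 0.4}
ir    = {frozenset({"target"}): 0.5, frozenset({"clutter"}): 0.2,
         frozenset({"target", "clutter"}): 0.3}
print(dempster_combine(radar, ir))
```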
An algorithm is presented for tracking a landing aircraft using two different passive sensors, a laser range finder and an infrared camera. The main feature of this algorithm is that it is able to identify and compensate for abrupt disturbances. The algorithm is based on the extended Kalman filter (EKF) and the filtering confidence function (FCF) which introduces a learning approach to the tracking problem. The results of simulation using this learning tracking algorithm and the extended Kalman filter alone are presented and compared.
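For readers unfamiliar with the baseline, the fragment below is a generic extended Kalman filter predict/update cycle of the kind the learning scheme builds on. The state layout, models, and noise covariances are left as assumptions, and the filtering confidence function itself is not reproduced.

```python
# Generic EKF predict/update cycle (the baseline that the learning-based
# disturbance compensation extends).  Models and covariances are assumptions.
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One predict/update cycle.
    x, P : state estimate and covariance
    z    : measurement (e.g., range and angles from the two sensors)
    f, F : motion model and its Jacobian;  h, H : measurement model and Jacobian
    Q, R : process and measurement noise covariances
    """
    # predict
    x_pred = f(x)
    F_k = F(x)
    P_pred = F_k @ P @ F_k.T + Q
    # update
    H_k = H(x_pred)
    S = H_k @ P_pred @ H_k.T + R
    K = P_pred @ H_k.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new
```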
Often one is interested in multisensor fusion to enhance the recognition of critical targets -- even though, in isolation, none of the sensors can supply sufficient information for detection. Recognition under such adverse conditions requires the best available techniques, e.g., Bayesian methods. Previously, through careful target and sensor phenomenological modeling, we overcame the main objection to single-sensor Bayesian automatic target detection, i.e., the rigorous development of the necessary target probabilities. In this paper we show that one can further use a process of conditioning on target and sensor phenomenology to conditionally decouple the sensors. Optimal fusion then proceeds simply by combining the conditionally independent target probabilities arising from the individual sensors.
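The decoupling result reduces fusion to a product of per-sensor terms. A minimal numerical sketch of that combination rule is shown below; the prior and likelihood values are invented for illustration only.

```python
# Fusion under conditional independence: given that the sensor observations
# are conditionally independent given the target hypothesis, the posterior is
# a normalized product of per-sensor likelihoods and the prior.
def fuse_posterior(prior, likelihoods):
    """prior: P(target); likelihoods: list of (P(z_i | target), P(z_i | no target))."""
    p_t, p_nt = prior, 1.0 - prior
    for l_t, l_nt in likelihoods:
        p_t *= l_t
        p_nt *= l_nt
    return p_t / (p_t + p_nt)

# e.g. two weak sensors, each individually insufficient for detection
print(fuse_posterior(prior=0.05, likelihoods=[(0.7, 0.3), (0.6, 0.25)]))
```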
The extravehicular activity helper/retriever (EVAHR) is a robot currently being developed by the Automation and Robotics Division at the NASA Johnson Space Center to support activities in the neighborhood of Space Station Freedom. The EVAHR's primary responsibilities are to retrieve free-flying objects or to perform extravehicular activities in cooperation with crew members. These responsibilities could never be fulfilled without a robust and versatile computer vision system. This paper presents a preliminary design of the EVAHR's vision system and its initial implementation. The preliminary design consists of a vision system planner and many sub-modules for performing various vision functions. Top-down and bottom-up approaches have been taken for the initial implementation. While the top-down approach focuses on laying out the framework of the EVAHR's vision system planner, the bottom-up approach emphasizes building up computational capabilities such as search, tracking, and pose estimation. Experimental results of the initial implementation are included in the paper.
This research focuses on data and conceptual enhancement algorithms. To be useful in many real-world applications, e.g., autonomous or teleoperated robotics, real-time feedback is critical. Unfortunately, many multi-sensor integration (MSI)/image processing algorithms require significant processing time. The basic direction of this research is the potentially faster and more robust formation of `clusters from pixels' rather than the slower process of extracting `clusters from images.' Techniques are evaluated on actual multi-modal sensor data obtained from a laser range camera, i.e., range and reflectance images. A suite of over thirty conceptual enhancement techniques is developed, evaluated, and compared on this sensor domain. The overall result is a general-purpose MSI conceptual enhancement approach which can be efficiently implemented and used to supply input to a variety of high-level processes, including object recognition, path planning, and object avoidance systems.
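A hedged sketch of the `clusters from pixels' idea: each pixel contributes a feature vector built directly from the registered range and reflectance values (plus image position), and clusters are formed over those vectors rather than by segmenting each image separately. The use of k-means and the feature scaling below are our own choices, not the specific techniques evaluated in the paper.

```python
# Clusters formed directly from multi-modal pixel data: each pixel becomes a
# (row, col, range, reflectance) feature vector.  k-means and the scaling are
# illustrative choices only.
import numpy as np
from sklearn.cluster import KMeans

def cluster_pixels(range_img, refl_img, k=6, spatial_weight=0.2):
    rows, cols = np.indices(range_img.shape)
    feats = np.stack([rows.ravel(), cols.ravel(),
                      range_img.ravel(), refl_img.ravel()], axis=1).astype(float)
    feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-9)
    feats[:, :2] *= spatial_weight              # down-weight image position
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)
    return labels.reshape(range_img.shape)      # per-pixel cluster map
```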
An experiment in fusion of real and logical sensors derived from range and intensity images is described. The real sensor used is an active triangulation laser range finder that has programmable speed and region-of-interest scanning and produces range and intensity return images. The logical sensors, which provide intensity edges, jump edges, convex and concave fold edges, and surface curvature derived from these images, are discussed. Theoretical and experimental means of characterizing these sensors and building a satisfactory sensor model are described. The characterization is made both in terms of measurement accuracy and range dependency of the logical sensors on intensity of the returned laser light. The use of the sensor and its model for edge based segmentation is described and evaluated with several test images. Each of the categories of edges is independently detected using the sensor model. The decision based upon the different logical sensors is the result of fusion using dependency rules about the sensor interactions. The result is an edge map with labeled edges. A rule based approach to closing the gaps in this edge map is described and the final segmentation is obtained. It is shown that the fusion helps in finding and localizing correct edges, as well as removing spurious edges.
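As a concrete illustration of two of the logical sensors named above, the sketch below derives intensity edges and range jump edges from the registered images and combines them with a simple dependency rule. The gradient operators, thresholds, and labeling rule are assumptions made for illustration, not the paper's rule base.

```python
# Two logical sensors made explicit: intensity edges and range "jump" edges,
# combined with a simple (illustrative) dependency rule into a labeled edge map.
import numpy as np
from scipy import ndimage

def logical_edge_sensors(intensity, range_img, i_thresh=30.0, r_thresh=0.15):
    i_grad = np.hypot(ndimage.sobel(intensity.astype(float), 0),
                      ndimage.sobel(intensity.astype(float), 1))
    r_grad = np.hypot(ndimage.sobel(range_img.astype(float), 0),
                      ndimage.sobel(range_img.astype(float), 1))
    intensity_edge = i_grad > i_thresh
    jump_edge      = r_grad > r_thresh            # depth discontinuity
    # dependency rule (illustrative): range discontinuities take precedence;
    # intensity-only responses are treated as surface-marking/fold candidates
    labels = np.zeros(intensity.shape, dtype=np.uint8)
    labels[intensity_edge & ~jump_edge] = 1       # surface marking / fold candidate
    labels[jump_edge] = 2                         # occluding (jump) edge
    return labels
```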
In this paper we present results of a study of registered range and reflectance images acquired using a prototype amplitude-modulated CW laser radar. Ranging devices such as laser radars represent new technologies which are being applied in aerospace, nuclear, and other hazardous environments where remote inspections, 3D identifications, and measurements are required. However, data acquired using devices of this type may contain non-stationary, signal-dependent noise, range-reflectance crosstalk, and low-reflectance range artifacts. Low-level fusion algorithms play an essential role in achieving reliable performance by handling the complex noise, systematic errors, and artifacts. The objective of our study is the development of a stochastic fusion algorithm which takes as its input the registered image pair and produces as its output a reliable description of the underlying physical scene in terms of locally smooth surfaces separated by well-defined depth discontinuities. To construct the algorithm we model each image as a set of coupled Markov random fields representing pixel and several orders of line processes. Within this framework we (i) impose local smoothness constraints, introducing a simple linearity property in place of the usual sums over clique potentials; (ii) fuse the range and reflectance images through line process couplings; and (iii) use nonstationary, signal-dependent variances, adaptive thresholding, and a form of Markov natural selection. We show that the resulting algorithm yields reliable results even in worst-case scenarios.
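For orientation, a generic coupled-MRF energy with a binary line process is shown below; in the fusion setting, the range and reflectance fields each carry such an energy and share (or couple) their line processes. The paper's actual functional replaces the usual clique-potential sums with a linearity property and uses signal-dependent variances, so this is only indicative of the standard form, not the authors' formulation.

```latex
% Generic coupled-MRF energy: f is the reconstructed field, d the observed
% data, l_{ij} a binary line process between neighboring pixels i, j, and
% \sigma_i^2 a (possibly signal-dependent) noise variance.  Illustrative only.
\[
E(f, l) \;=\; \sum_i \frac{(f_i - d_i)^2}{2\sigma_i^2}
\;+\; \lambda \sum_{\langle i,j \rangle} (f_i - f_j)^2 \,(1 - l_{ij})
\;+\; \alpha \sum_{\langle i,j \rangle} l_{ij}
\]
```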
In this paper, a method for multiple-view range image fusion is presented. The method is based on partial geometric modeling for each acquired range image and on model updating for image fusion to obtain a complete 3D geometric description of objects. The presented approach conducts partial modeling and range image processing in a device frame to reduce the computational complexity associated with handling range data in a Cartesian frame. Each partial model is mapped into a global Cartesian frame, and is integrated with the fusion model obtained from previous partial models.
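A minimal sketch of the mapping step: points computed in the scanner's device frame are transformed by the view pose into the global Cartesian frame and appended to the accumulated model. The spherical device-frame convention and the 4x4 pose representation are assumptions for illustration.

```python
# Map a partial model from the device frame into the global Cartesian frame
# and merge it with the accumulated model.  Frame conventions are assumptions.
import numpy as np

def device_to_cartesian(r, azimuth, elevation):
    """Convert range samples from scanner (device) coordinates to x, y, z."""
    x = r * np.cos(elevation) * np.cos(azimuth)
    y = r * np.cos(elevation) * np.sin(azimuth)
    z = r * np.sin(elevation)
    return np.stack([x, y, z], axis=-1)

def merge_view(global_points, r, az, el, pose):
    """pose: 4x4 homogeneous transform from this view's frame to the global frame."""
    pts = device_to_cartesian(r, az, el).reshape(-1, 3)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    pts_global = (pose @ pts_h.T).T[:, :3]
    return np.vstack([global_points, pts_global]) if len(global_points) else pts_global
```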
Past research into multi-modality sensor data fusion has given rise to approaches that can be characterized as heuristic and ad hoc. In this paper we propose a framework for fusing different modalities of registered sensory inputs at the data level by using the calculus of variations. The result is a mathematically rigorous method for improving data quality, which can subsequently be utilized for fusion at higher levels. We demonstrate this approach on the problem of estimating the three-dimensional scene via the use of simulated noisy range and intensity images. The results indicate that a significant improvement over shape estimation via either shape from shading or shape from ranging alone can be achieved. The applicability of our fusion paradigm to other fusion problems is also discussed.
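As a generic illustration of data-level variational fusion, one representative functional combining a range-fidelity term, a shading (image-irradiance) term, and a smoothness term is shown below. The weights and the particular form are our assumptions, not the authors' functional.

```latex
% Representative fusion functional over the surface z(x, y), with p = z_x and
% q = z_y: range fidelity + shading consistency (reflectance map R, image I)
% + smoothness.  Weights and the form itself are illustrative assumptions.
\[
E(z) \;=\; \iint \Big[ \lambda_r \,\big(z - z_{\mathrm{range}}\big)^2
\;+\; \lambda_s \,\big(R(p, q) - I\big)^2
\;+\; \mu \,\big(p_x^2 + p_y^2 + q_x^2 + q_y^2\big) \Big]\, dx\, dy
\]
```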
The present work outlines a design strategy whereby 2D intensity and 3D range data are fused in order to yield higher accuracy range sensing. The proposed method can be applied to a wide variety of range finder systems with minor or no changes in the procedure used for raw data acquisition. The approach exploits a modified shape-from-shading algorithm which allows the inclusion of range data. The algorithm has been mapped on an SIMD parallel computer with 2048 processors. Since this is a data parallel problem needing only local communications, the efficiency of the execution is quite high and we have observed speed gains of sixty compared to a SunSPARC. The precision improvement obtained for the range image has been evaluated for a variety of 3D scenes, and is typically better than 2 bits.
Binocular stereo, vergence stereo, and depth from focus have each been studied extensively in isolation for the extraction of depth. These passive approaches have their own strengths and limitations, and they differ in their input image requirements, computational requirements, and the quality and nature of the results they provide. None of them by itself is sufficient to reliably extract depth information from a wide variety of scenes. It is evident that an integrated system which employs multiple cues and exploits their strengths is critical to developing a robust depth extraction system. The active vision paradigm allows us to develop such a system, in which various depth cues cooperate to derive the 3-D structure of the scene. In this paradigm, integration is accomplished by active, intelligent control of the acquisition processes, tightly coupled to the analysis of image data.
The National Aeronautics and Space Administration (NASA), other government agencies, and private industry have requirements to map and analyze 3-dimensional surfaces of varying regularity and material composition. This requires a high fidelity, 3-dimensional description of the work space. In cases where complete and current information about the space is not available, topographic characterization allows on-line initialization and/or modification of the work space database. The mapping environment is often challenging with regard to lighting, radiation, temperature, atmosphere, and causticity. This paper describes a system that provides topographic characterization based on fusing intensity and depth information, and describes the application of this technique for inspection of Shuttle thermal tiles.
An approach for synthesizing laser radar (ladar) images based on three-dimensional object models is described. The method is developed within the framework of a volume surface octree-based multi-sensor image generation scheme that also generates visual and thermal imagery. The octree-based approach provides a unified basis for modeling 3-D objects. The octree model of an object is an efficient three-dimensional representation, and it is easily constructed from multiple silhouettes of the object. It is shown that the volume surface octree representation is well suited to the generation of laser radar range and reflectance imagery. An appropriate statistical model is used with this representation to simulate speckle noise, which allows the synthesis of realistic imagery. Examples of the images produced by this scheme are presented.
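A small hedged sketch of the speckle step: fully developed speckle is commonly modeled as multiplicative noise with negative-exponential intensity statistics (gamma-distributed after multi-look averaging). The specific statistical model used in the paper may differ, and the parameters below are illustrative.

```python
# Speckle simulation on a synthetic reflectance image: unit-mean multiplicative
# gamma noise (negative-exponential for a single look).  Parameters are
# illustrative; the paper's exact statistical model may differ.
import numpy as np

def add_speckle(reflectance, looks=1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    speckle = rng.gamma(shape=looks, scale=1.0 / looks, size=reflectance.shape)
    return reflectance * speckle
```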