Currently, most mapping tasks are carried out using remote sensing data such as satellite imagery and LIDAR point clouds. This paper presents the integration of a QuickBird image set (both panchromatic and multispectral) and a LIDAR DEM generated from a LIDAR point cloud for mapping the coastline. A number of image processing techniques were applied to the panchromatic image to generate a coastline. A supervised classification was then performed on the multispectral image, followed by a raster-to-vector conversion, to extract a second shoreline. A third line was created from the LIDAR data using a set of processing algorithms. The three lines were weighted, and pixels belonging to all of them were grouped to fit a final coastline. To evaluate the results, we manually extracted the corresponding line from the panchromatic image and compared points belonging to both lines. Differences averaged about 1.37 meters.
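The fusion step described above, weighting three candidate coastlines and combining them into a final line, could be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function names and weights are invented here, and it assumes the three lines have already been resampled into point-to-point correspondence.

```python
import numpy as np

def fuse_coastlines(lines, weights):
    """Weighted average of co-registered coastline samples.

    lines: (k, n, 2) array -- k candidate lines, n (x, y) samples each,
    assumed already matched point-to-point.
    weights: length-k sequence of per-line confidence weights.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # normalize weights
    return np.tensordot(w, np.asarray(lines, float), axes=1)

def mean_offset(line, reference):
    """Average point-to-point distance to a reference line,
    mimicking the accuracy check against the manually digitized line."""
    d = np.linalg.norm(np.asarray(line, float) - np.asarray(reference, float), axis=1)
    return float(d.mean())
```

For example, fusing two lines offset by 2 m in x with equal weights yields the midline between them; the paper's reported 1.37 m figure corresponds to `mean_offset` against the manually extracted reference.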
The rapid advance of remote sensing technologies has established the potential to gather accurate and reliable information about the Earth's surface using high-resolution satellite images. Remote sensing satellite images with pixel sizes of less than one meter are currently used in large-scale mapping. Rigorous photogrammetric equations are usually used to describe the relationship between image coordinates and ground coordinates. These equations require knowledge of the exterior and interior orientation parameters of the image, which might not be available. On the other hand, the parallel projection transformation can be used to represent the mathematical relationship between the image-space and object-space coordinate systems, and it provides the accuracy required for large-scale mapping using fewer ground control features. This article investigates the differences between point-based and line-based parallel projection transformation models in rectifying satellite images of different resolutions. The point-based parallel projection transformation model and its extended form are presented, and the corresponding line-based forms are developed. Results showed that the RMS errors computed using the point- and line-based transformation models are equivalent and satisfy the requirements for large-scale mapping. The differences between the transformation parameters computed using the point- and line-based models are insignificant. The results also showed a high correlation between differences in ground elevation and the RMS error.
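As a rough illustration of the point-based model discussed above, the basic (non-extended) parallel projection is commonly written as the 8-parameter linear form x = A1·X + A2·Y + A3·Z + A4, y = A5·X + A6·Y + A7·Z + A8, which can be estimated from ground control points by linear least squares. The sketch below assumes that form; the function names are invented here, and the paper's extended and line-based variants are not shown.

```python
import numpy as np

def fit_parallel_projection(ground_xyz, image_xy):
    """Estimate the 8-parameter parallel projection
        x = A1*X + A2*Y + A3*Z + A4
        y = A5*X + A6*Y + A7*Z + A8
    by linear least squares from ground control points."""
    G = np.asarray(ground_xyz, float)
    A = np.hstack([G, np.ones((len(G), 1))])     # design matrix [X Y Z 1]
    xy = np.asarray(image_xy, float)
    px, *_ = np.linalg.lstsq(A, xy[:, 0], rcond=None)
    py, *_ = np.linalg.lstsq(A, xy[:, 1], rcond=None)
    return px, py

def apply_parallel_projection(px, py, ground_xyz):
    """Project ground points into image space with the fitted parameters."""
    G = np.asarray(ground_xyz, float)
    A = np.hstack([G, np.ones((len(G), 1))])
    return A @ px, A @ py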
Recently, developments in airborne sensors and easy-to-fly, reliable, low-cost commercial Unmanned Aerial Vehicles (UAVs) have opened a new era of high-quality, reliable mapping from UAVs using remote sensing techniques. The restricted payload capacity of low-cost UAVs imposes constraints on the quality of their navigation systems and the sensors they can carry. Therefore, LIDAR sensors with limited sample rates are used within UAV systems. This article introduces several applications that utilize UAV-LIDAR systems, the processing of a sample dataset downloaded from the internet, and a new system that is being developed and flown. Our data were collected with a DJI S900 hexacopter and a VLP-16 LIDAR sensor from Velodyne. We then processed the raw data to generate a 3D point cloud. The test site is a farming site, so we classified the points into ground points and vegetation points. The results are very promising for an early research investigation. We are currently planning further flights with more rigorous systems and quantitative evaluation.
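The ground/vegetation separation mentioned above can be approximated by a simple grid-minimum height filter: within each planimetric cell, returns close to the lowest return are treated as ground and the rest as vegetation. This is a hedged sketch under that assumption; the cell size, threshold, and function name are illustrative, and the actual classification method used on the VLP-16 data is not specified in the abstract.

```python
import numpy as np

def classify_ground(points, cell=1.0, height_thresh=0.2):
    """Label LIDAR returns as ground (True) or vegetation (False).

    points: (n, 3) array of (x, y, z) returns.
    cell: planimetric grid cell size in meters (illustrative).
    height_thresh: max height above the cell minimum still counted
    as ground (illustrative).
    """
    pts = np.asarray(points, float)
    ix = np.floor(pts[:, 0] / cell).astype(int)
    iy = np.floor(pts[:, 1] / cell).astype(int)
    labels = np.zeros(len(pts), dtype=bool)
    for key in set(zip(ix.tolist(), iy.tolist())):
        mask = (ix == key[0]) & (iy == key[1])
        zmin = pts[mask, 2].min()                 # lowest return in this cell
        labels[mask] = pts[mask, 2] <= zmin + height_thresh
    return labels
```

On flat farmland this crude filter already separates crops from soil; production workflows would typically use a progressive morphological or TIN-densification filter instead.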
The last three Apollo lunar missions (15, 16, and 17) carried an integrated photogrammetric mapping system consisting of a Metric Camera (MC), a high-resolution Panoramic Camera, a Star Camera, and a Laser Altimeter. Recently, images taken by the MC were scanned by Arizona State University (ASU); these images contain valuable information for scientific exploration, engineering analysis, and visualization of the Moon's surface. In this article, we took advantage of the large overlaps, the multiple viewing angles, and the high ground resolution of the images taken by the Apollo MC to generate an accurate and trustworthy surface of the Moon. After computing the positions and orientations of the exposure stations through rigorous photogrammetric bundle adjustment, we carried out a two-step matching process that combines hierarchical matching and least squares matching; both steps are implemented in a multi-photo scheme. Our results look very promising and demonstrate the effectiveness of the proposed algorithm in accounting for depth discontinuities, occlusions, and noise.
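The multi-photo matching pipeline above is too involved for a short example, but its core operation, locating a template patch from one image inside a search window of another, can be sketched with exhaustive normalized cross-correlation. Note this is a deliberate simplification: the paper uses hierarchical matching followed by least squares matching, not the brute-force NCC search shown here, and the function names are invented for illustration.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def match_template(image, template):
    """Best integer (row, col) offset of template within image, found
    by exhaustive NCC search over all candidate windows."""
    H, W = image.shape
    h, w = template.shape
    best_score, best_pos = -2.0, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            s = ncc(image[r:r + h, c:c + w], template)
            if s > best_score:
                best_score, best_pos = s, (r, c)
    return best_pos, best_score
```

A hierarchical scheme would run this search on a coarse image pyramid level first and refine the result at full resolution, with least squares matching providing the final sub-pixel estimate.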