In unmanned situational awareness tasks, sensors must acquire information about the surrounding environment to enable target identification and distance detection. By their optical characteristics, the sensors used to collect target information can be divided into visible-band sensors, such as visible light cameras, and non-visible-band sensors, such as Light Detection and Ranging (LiDAR). LiDAR excels at acquiring target distance and reflection intensity information, but its output lacks detail and is not well suited to human observation. Visible light cameras capture well-structured, detailed target information, but cannot easily obtain distance information. This paper studies a target recognition method based on visible-depth image fusion for heterogeneous sensors, addressing the influence of environmental, spatial, and heterogeneous-data factors on the fusion results. Point cloud and visible light image data are fused into visual images that incorporate distance information by combining a RANSAC-based spatial line intersection method with the least squares method. Target recognition and distance detection methods based on the YOLOv5s algorithm are then applied to the fused data. Through the construction of a dataset and the training of a deep learning network, specific target identification and distance determination are realized. The results show that the specific target recognition rate of this method exceeds 94%, with a distance accuracy of 2 mm at a range of 10 m. The recognition and distance results can be sent directly to the unmanned situational awareness platform to inform its subsequent decisions.
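The abstract does not reproduce the fusion pipeline in detail, but its core step, attaching LiDAR range information to a visible image, can be illustrated. Below is a minimal sketch of projecting calibrated LiDAR points into the camera frame to form an RGB-plus-depth image; the function name, the calibration inputs `R`, `t`, `K`, and the sparse-depth representation are assumptions for illustration, not the paper's method, and the RANSAC/least-squares calibration that would produce `R` and `t` is omitted.

```python
import numpy as np

def fuse_depth_into_image(points, R, t, K, image):
    """Project LiDAR points into the camera frame and attach a depth channel.

    points : (N, 3) LiDAR points in the LiDAR frame
    R, t   : extrinsic rotation (3x3) and translation (3,), LiDAR -> camera
             (assumed already estimated, e.g. by the paper's RANSAC-based
             spatial line intersection plus least squares calibration)
    K      : camera intrinsic matrix (3x3)
    image  : (H, W, 3) visible light image
    Returns an (H, W, 4) array: RGB plus a sparse depth channel in metres.
    """
    h, w = image.shape[:2]
    cam_pts = points @ R.T + t            # LiDAR frame -> camera frame
    cam_pts = cam_pts[cam_pts[:, 2] > 0]  # keep points in front of the camera
    uv = cam_pts @ K.T                    # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]           # perspective divide
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth = np.zeros((h, w), dtype=np.float32)
    depth[v[valid], u[valid]] = cam_pts[valid, 2]   # z-depth in camera frame
    return np.dstack([image.astype(np.float32), depth])
```

A fused image of this kind could then be passed to a YOLOv5s detector (e.g. loaded via `torch.hub.load('ultralytics/yolov5', 'yolov5s')`); how the paper's network consumes the extra depth channel is not specified in the abstract.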
The six-light-screen vertical target is ideal equipment for measuring the flight parameters of projectiles fired by rapid-fire weapons. The light-screen-array models of this equipment fall mainly into double-V and double-N configurations, which build different light-screen structures in space. By recording the time at which a projectile crosses each light screen, and combining these times with the known spatial structure of the light-screen array, the flight parameters of the projectile can be measured. Because the measurement formula is determined by the light-screen-array model, the error-influencing factors are considered separately for each model, and the influence of each factor is analyzed in a selected target plane. The error distributions of the individual factors are compared under identical conditions. The combined errors are then calculated, and the combined error distribution over a 1 m × 1 m target plane is estimated. This research provides a useful reference for error analysis in practice and offers new ideas for improving the measurement precision of rapid-fire weapons.
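To make the measurement principle concrete, here is a minimal sketch for one common N-shaped screen group: two vertical screens separated by a known distance with an inclined screen between them. This geometry and the formulas are a textbook illustration under stated assumptions, not the paper's double-V or double-N formulas, and all names and parameters below are hypothetical.

```python
import math

def n_array_measurement(t1, t2, t3, d, theta):
    """Flight parameters from one N-shaped light-screen group (a sketch).

    Assumed geometry (not taken from the paper): two vertical screens
    separated by d (m), with a third screen inclined at angle theta (rad),
    hinged at the base of the first screen.

    t1, t2, t3 : crossing times (s) of screen 1, the inclined screen, screen 2
    Returns (mean velocity in m/s, height y of the hit point in m).
    """
    v = d / (t3 - t1)                      # mean velocity across the group
    y = v * (t2 - t1) / math.tan(theta)    # height from the inclined-screen delay
    return v, y
```

In such a setup, timing errors and uncertainty in the screen geometry propagate through these formulas differently depending on where the projectile crosses the target plane, which is why the paper analyzes the error distribution point by point over the 1 m × 1 m plane.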