Recent advances have shown the vital role of sensor fusion in accurate detection, especially for advanced driver assistance systems. We introduce a novel procedure for depth upsampling and sensor fusion that together improve detection performance over state-of-the-art results for detecting cars. Upsampling is generally based on combining data from an image to compensate for the low resolution of a LiDAR (Light Detection and Ranging) sensor. This paper, in contrast, presents a framework for obtaining a dense depth map solely from a single LiDAR point cloud, which makes it possible to use a single deep network for both the LiDAR and image modalities. The produced full-depth map is stacked with the grayscale version of the image to produce a two-channel input for a deep neural network. This simple preprocessing structure is effective at filling in cars' shapes, which helps the fusion framework outperform the state of the art on the KITTI object detection benchmark for the Car class.
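The two-channel input described above can be sketched as follows. This is a minimal illustration, not the paper's actual upsampling method: the sparse-to-dense step here uses a naive iterative 3x3 max-dilation hole fill as a stand-in, and all shapes, seed values, and the `densify` helper are hypothetical.

```python
import numpy as np

H, W = 8, 16  # hypothetical image size

# Sparse depth image: most pixels are 0 (no LiDAR return).
sparse = np.zeros((H, W), dtype=np.float32)
sparse[2, 3] = 10.0   # hypothetical LiDAR returns
sparse[5, 12] = 25.0

def densify(depth, iters=8):
    """Naive hole filling (stand-in for the paper's upsampling):
    empty pixels take a non-zero neighbor's value via iterative
    3x3 max dilation, growing the filled region by one pixel
    per iteration."""
    d = depth.copy()
    for _ in range(iters):
        padded = np.pad(d, 1, mode="edge")
        # 3x3 neighborhood maximum; non-zero values propagate outward
        windows = [padded[i:i + H, j:j + W] for i in range(3) for j in range(3)]
        neigh = np.max(windows, axis=0)
        d = np.where(d > 0, d, neigh)
    return d

dense = densify(sparse)

# Grayscale version of the camera image (random placeholder here).
gray = np.random.default_rng(0).random((H, W)).astype(np.float32)

# Two-channel network input: normalized dense depth + grayscale.
x = np.stack([dense / dense.max(), gray], axis=-1)
print(x.shape)  # (8, 16, 2)
```

With enough dilation iterations every pixel receives a depth value, so the network sees a full-depth channel aligned with the grayscale channel.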