19 September 2019 Semantic scene understanding on mobile device with illumination invariance for the visually impaired
Abstract
For visually impaired people (VIP), perceiving their surroundings is very difficult. To address this problem, we propose a scene understanding system to aid VIP in indoor and outdoor environments. Semantic segmentation performance is generally sensitive to environment and illumination changes, including the change between indoor and outdoor environments and changes across different weather conditions. Meanwhile, most existing methods focus on either accuracy or efficiency rather than the balance between the two. In the proposed system, the training dataset is preprocessed with an illumination-invariant transformation to weaken the impact of illumination changes and improve the robustness of the semantic segmentation network. Regarding the network structure, lightweight networks such as MobileNetV2 and ShuffleNet V2 are employed as the backbone of DeepLabv3+ to improve accuracy with little additional computation, making the system suitable for a mobile assistance device. We evaluate the robustness of the segmentation model across different environments on the Gardens Point Walking dataset and demonstrate the strongly positive effect of the illumination-invariant pre-transformation in challenging real-world domains. The network, trained on a desktop computer, achieves relatively high accuracy on ADE20K relabeled into 20 classes. The frame rate of the proposed system reaches 83 FPS on a 1080Ti GPU.
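The abstract does not specify the exact illumination-invariant transformation used. A common choice in the robust-vision literature is the single-channel log-chromaticity mapping of Maddern et al. (2014), sketched below as a plausible illustration; the parameter `alpha` is a camera-dependent constant (the value 0.48 here is an assumed example, not taken from the paper).

```python
import numpy as np

def illumination_invariant(rgb, alpha=0.48):
    """Map an HxWx3 RGB image to a single-channel illumination-invariant
    representation: ii = 0.5 + log(G) - alpha*log(B) - (1-alpha)*log(R).
    This is one standard formulation; the paper's exact transform may differ."""
    rgb = rgb.astype(np.float64) / 255.0 + 1e-6  # normalize, avoid log(0)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    ii = 0.5 + np.log(g) - alpha * np.log(b) - (1.0 - alpha) * np.log(r)
    return np.clip(ii, 0.0, 1.0)
```

Applying such a transform to every training image before feeding it to the segmentation network suppresses shadow and lighting variation, which is consistent with the robustness improvements the abstract reports.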
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Chengyou Xu, Kaiwei Wang, Kailun Yang, Ruiqi Cheng, and Jian Bai "Semantic scene understanding on mobile device with illumination invariance for the visually impaired", Proc. SPIE 11169, Artificial Intelligence and Machine Learning in Defense Applications, 111690Q (19 September 2019); https://doi.org/10.1117/12.2532550
Proceedings paper, 9 pages.