Perceiving their surroundings is very difficult for Visually Impaired People (VIP). To address this problem, we propose a scene understanding system that aids VIP in both indoor and outdoor environments. Semantic segmentation performance is generally sensitive to environmental and illumination changes, including the transition between indoor and outdoor scenes and variations across weather conditions. Meanwhile, most existing methods focus on either accuracy or efficiency rather than the balance between the two. In the proposed system, the training dataset is preprocessed with an illumination-invariant transformation to weaken the impact of illumination changes and improve the robustness of the semantic segmentation network. Regarding the network architecture, lightweight networks such as MobileNetV2 and ShuffleNet V2 are employed as the backbone of DeepLabv3+, improving accuracy with little additional computation, which makes the system suitable for mobile assistive devices. We evaluate the robustness of the segmentation model across different environments on the Gardens Point Walking dataset and demonstrate the clearly positive effect of the illumination-invariant pre-transformation in challenging real-world domains. The trained network achieves relatively high accuracy on ADE20K relabeled into 20 classes, and the proposed system runs at up to 83 FPS on a 1080Ti GPU.
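The abstract does not specify which illumination-invariant transformation is used; a common choice for outdoor robustness is the single-channel log-chromaticity mapping, sketched below as an assumption. The parameter `alpha` depends on the camera's spectral response (0.48 is a frequently quoted default, not a value from this work), and the function name is hypothetical.

```python
import numpy as np

def illumination_invariant(rgb, alpha=0.48):
    """Map an RGB image to a single illumination-invariant channel.

    Sketch of a log-chromaticity transform (an assumption; the paper
    does not name its exact transform):
        ii = 0.5 + log(G) - alpha*log(B) - (1 - alpha)*log(R)
    This value is unchanged when all channels are scaled by a common
    brightness factor, which is what makes it illumination-invariant.
    `alpha` is camera-dependent; 0.48 is a commonly used default.
    """
    rgb = rgb.astype(np.float64) / 255.0 + 1e-6  # epsilon avoids log(0)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    ii = 0.5 + np.log(g) - alpha * np.log(b) - (1.0 - alpha) * np.log(r)
    return np.clip(ii, 0.0, 1.0)
```

Because a uniform brightness change multiplies R, G, and B by the same factor, the weighted log-differences cancel the factor out, so a scene pixel maps to (almost) the same value under bright and dim lighting.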