Protection panels in substations ensure the stable operation of electrical equipment, which is of great significance to the local electrical power system. To reduce manual intervention and improve efficiency, we propose a vision-based status recognition method for protection panels that can be used in automatic inspection equipment. The approach is divided into three stages: pre-processing, switch localization, and status recognition. During pre-processing, the image is first warped into a front view by inverse perspective mapping (IPM), guided by a set of four artificial auxiliary marks. A region of interest (ROI) is then extracted from the warped image, discarding most of the irrelevant context. Next, a gradient-intensity feature is computed to locate the switches on a panel: after projecting the gradient-intensity image horizontally and vertically, the layout of the switches is determined by analyzing the two projection curves. Finally, an SVM classifier is trained to recognize the status of each switch on a protection panel. The input of the classifier is a gradient-orientation feature extracted from a normalized single-switch region, and the output is the connected or disconnected state of the switch. Experiments show that our approach has low time consumption and achieves a recognition accuracy of 99%.
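The projection step of the switch-localization stage can be sketched in a few lines of numpy. This is a minimal illustration of the idea, not the authors' implementation: the gradient-intensity image is summed along one axis, and contiguous runs of high projected energy are taken as switch columns (or rows). The finite-difference gradient and the `thresh_ratio` cutoff are our own assumptions.

```python
import numpy as np

def gradient_projection_segments(img, axis=0, thresh_ratio=0.5):
    """Locate high-gradient bands in an image by 1-D projection.

    axis=0 sums over rows (per-column profile, i.e. vertical projection);
    axis=1 sums over columns (per-row profile, i.e. horizontal projection).
    Returns a list of (start, end) index ranges, end exclusive.
    """
    f = img.astype(float)
    # Gradient intensity: sum of absolute horizontal and vertical differences
    gx = np.abs(np.diff(f, axis=1, prepend=f[:, :1]))
    gy = np.abs(np.diff(f, axis=0, prepend=f[:1, :]))
    grad = gx + gy
    # Project the gradient image along the chosen axis
    profile = grad.sum(axis=axis)
    # Keep positions whose projected energy exceeds a fraction of the peak
    mask = profile > thresh_ratio * profile.max()
    # Group consecutive above-threshold positions into segments
    segments, start = [], None
    for i, m in enumerate(mask):
        if m and start is None:
            start = i
        elif not m and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(mask)))
    return segments
```

Applied once per axis, the returned row and column segments together give the grid of candidate switch regions that are then normalized and passed to the classifier.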
Estimating the head pose of pedestrians is a crucial task in autonomous driving systems. It plays a significant role in many research fields, such as pedestrian intention judgment and human-vehicle interaction. While most current studies focus on driver's-view images, we argue that surveillance images are also worthy of attention, since they provide more global information than driver's-view images. In this paper, we propose a method for head pose estimation from surveillance images. The approach consists of two stages: head detection and pose estimation. Since a pedestrian's head occupies very few pixels in a surveillance image, a two-step strategy is used to improve head-detection performance. First, we train a model to extract the body region from the source image. Second, a head detector is trained to locate the head within the extracted body regions. We use YOLOv3 as the detection network for both body and head detection. We treat head pose estimation as a 10-class classification task, using ResNet-50 as the backbone of the classifier, whose input is the result of head detection. A series of experiments demonstrates the good performance of the proposed method.