Eye state analysis (i.e., determining whether the eye is open or closed) is an important step in the detection of driver fatigue. In this paper, a weighted color difference matrix algorithm is proposed for analyzing a driver’s eye state. First, an image of the driver’s eye is obtained from a face detection database. Two feature images are then constructed from the eye image, each of which is gray-scale normalized. Each feature image is projected into a block feature matrix, and feature values are calculated to construct a feature vector. Finally, a support vector machine is trained to classify the extracted feature vectors, and the state of the driver’s eye is judged to further analyze the driver’s level of fatigue. To evaluate the performance of the proposed algorithm, experiments are conducted on several publicly available databases, showing that the algorithm is efficient and effective.
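The block-feature step described above can be sketched minimally as follows: the normalized gray-scale eye image is divided into blocks, and one feature value per block is collected into a feature vector. The block size and the choice of the block mean as the feature value are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: build a feature vector of per-block mean intensities from a
# gray-scale image, assuming intensities are already normalized to [0, 1].

def block_feature_vector(image, block_size=2):
    """Flatten an H x W gray-scale image into per-block mean intensities."""
    h, w = len(image), len(image[0])
    features = []
    for bi in range(0, h, block_size):
        for bj in range(0, w, block_size):
            block = [image[i][j]
                     for i in range(bi, min(bi + block_size, h))
                     for j in range(bj, min(bj + block_size, w))]
            features.append(sum(block) / len(block))
    return features

# Toy 4x4 "eye image"; each 2x2 block yields one feature value.
img = [
    [0.1, 0.2, 0.8, 0.9],
    [0.1, 0.2, 0.8, 0.9],
    [0.5, 0.5, 0.3, 0.3],
    [0.5, 0.5, 0.3, 0.3],
]
vec = block_feature_vector(img, block_size=2)  # 4 feature values, one per block
```

A vector of this form would then be passed to the support vector machine for open/closed classification.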
Eye and mouth state analysis is an important step in fatigue detection. An algorithm that analyzes the states of the eyes and mouth by extracting contour features is proposed. First, the face region is detected in the acquired image database. The eyes are then located by an EyeMap algorithm, and a clustering method is used to extract the sclera-fitted eye contour and calculate its aspect ratio. In addition, an effective algorithm is proposed to solve the contour-fitting problem that arises when the eye is affected by strabismus. Meanwhile, a chromatism value s is defined in RGB space, and the mouth is accurately located through lip segmentation. Based on the color differences among the lips, skin, and mouth interior, the inner mouth contour is fitted to analyze the degree of mouth opening; in addition, an effective yawning judgment mechanism is applied to determine whether the driver is tired. Three different databases are used to evaluate the performance of the proposed algorithm, which requires no training and achieves high computational efficiency.
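The aspect-ratio check on the fitted eye contour can be illustrated with a small sketch, assuming the contour is already available as a list of (x, y) points. The ratio of the contour's bounding-box height to its width is compared against a threshold; the threshold value here is hypothetical, not taken from the paper.

```python
# Hedged sketch: decide open vs. closed from the fitted eye contour's
# bounding-box aspect ratio. The 0.25 threshold is an assumption for
# illustration only.

def contour_aspect_ratio(points):
    """Height-to-width ratio of the contour's axis-aligned bounding box."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    return height / width

def eye_is_open(points, threshold=0.25):
    return contour_aspect_ratio(points) >= threshold

open_contour = [(0, 0), (10, 0), (10, 4), (0, 4)]    # tall contour, ratio 0.4
closed_contour = [(0, 0), (10, 0), (10, 1), (0, 1)]  # nearly flat, ratio 0.1
```

An analogous ratio on the fitted inner mouth contour could feed the yawning judgment in the same way.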
This paper aims to achieve robust behavior recognition of video objects against complicated backgrounds. Features of the video objects are described and modeled according to the depth information of three-dimensional video. Multi-dimensional feature vectors are constructed and used to process the high-dimensional data. Stable object tracking in complex scenes is achieved through multi-feature behavior analysis, yielding the motion trail. Effective behavior recognition of the video objects is then obtained according to the decision criteria. Moreover, both the real-time performance of the algorithms and the accuracy of the analysis are greatly improved. The theory and methods for behavior analysis of video objects in real scenes put forward in this work have broad application prospects and important practical significance in security, counterterrorism, military, and many other fields.
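The trail-then-decide pipeline above can be sketched in miniature: per-frame object positions (as would come from depth-aided tracking) are accumulated into a motion trail, and a simple decision criterion stands in for the paper's behavior-recognition rules. The speed-based rule and its threshold are hypothetical placeholders, not the actual decision criteria.

```python
# Hedged sketch: classify a behavior from a tracked motion trail. The mean
# per-frame displacement and the 2.0 threshold are illustrative assumptions.
import math

def motion_trail_speed(trail):
    """Mean per-frame displacement along a list of (x, y) centroids."""
    steps = [math.dist(trail[i], trail[i + 1]) for i in range(len(trail) - 1)]
    return sum(steps) / len(steps)

def classify_behavior(trail, move_threshold=2.0):
    """Toy decision criterion: fast trails are 'moving', slow ones 'loitering'."""
    return "moving" if motion_trail_speed(trail) >= move_threshold else "loitering"

trail = [(0, 0), (3, 0), (6, 0), (9, 0)]  # steady motion, 3.0 units per frame
label = classify_behavior(trail)
```

A real system would replace the scalar speed with the multi-dimensional feature vectors described above, but the trail-to-label flow is the same.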