Road markings affected by abrasion, adhesion, and occlusion are difficult to locate accurately with typical point cloud clustering and segmentation methods. We propose an automatic road marking detection method based on local point cloud projection and 2D deep-learning object detection. First, local ground point cloud regions are extracted using prior information. Second, the point cloud is orthogonally projected into a two-dimensional image by an affine transformation, and the R-FCN detector is applied to locate road markings in the image; holes in the image are filled in by local maximum filtering, and a line-by-line search strategy based on maximum reflectance is proposed to refine the detected bounding boxes and improve detection accuracy. Finally, an inverse affine transformation and a local coordinate search restore the road marking coordinates from the two-dimensional image to the 3D point cloud. Experimental results on a point cloud dataset collected from Chinese highways show that the proposed method can detect road markings in complex scenes involving occlusion, abrasion, and adhesion. Compared with other methods, detection recall, accuracy, and efficiency are greatly improved.
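The orthogonal projection step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `project_to_image`, the grid resolution, and the use of per-pixel maximum reflectance are all assumptions made for the example.

```python
import numpy as np

def project_to_image(points, intensity, resolution=0.05):
    """Orthogonally project ground points onto a 2D reflectance image.

    points: (N, 3) array of ground point coordinates (hypothetical input)
    intensity: (N,) array of per-point reflectance values
    resolution: grid cell size in the same units as the points
    """
    xy = points[:, :2]
    mins = xy.min(axis=0)
    # Affine map from ground-plane coordinates to integer pixel indices
    pix = np.floor((xy - mins) / resolution).astype(int)
    h, w = pix[:, 1].max() + 1, pix[:, 0].max() + 1
    img = np.zeros((h, w), dtype=np.float32)
    # Keep the maximum reflectance per pixel; road markings are highly
    # reflective, so this makes them stand out in the projected image.
    np.maximum.at(img, (pix[:, 1], pix[:, 0]), intensity)
    # Returning mins and resolution lets the inverse affine transform map
    # detected pixels back to 3D point cloud coordinates later.
    return img, mins, resolution
```

The returned origin and resolution are what the inverse transformation in the final step would consume to recover 3D coordinates from detected 2D bounding boxes.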
Although eye tracking has recently been introduced into behavioral experiments based on the dot-probe paradigm, some characteristics of eye-tracking data receive far less attention than traditional measures such as reaction time. It is also necessary to relate eye-tracking data to the characteristics of the images shown in the experiments. In this study, new variables, such as fixation duration, fixation count, and eye-movement count, were extracted from eye-tracking data in a behavioral experiment based on the dot-probe paradigm; they were analyzed and compared with traditional reaction time. After analysis of positive and negative scenery images, parameters such as the peak-to-average ratio (PAR) of the hue frequency spectrum were extracted and showed differences between negative and positive images. These parameters of emotional images allowed an SVM classifier to discriminate scenery images by emotion well. Moreover, the hue frequency spectrum PAR of images was clearly correlated with eye-tracking statistics: when the dot was on the negative side, the hue frequency spectrum PAR of negative images conformed to a hyperbolic relationship with horizontal eye jumps, while that of positive images varied linearly with horizontal eye jumps. These results may help explain the mechanism of human attention and advance research in computer vision.
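One plausible reading of the hue frequency spectrum PAR is the peak-to-average ratio of the magnitude spectrum of an image's hue histogram. The sketch below illustrates that interpretation only; the abstract does not give the exact definition, and the function name `hue_spectrum_par`, the bin count, and the exclusion of the DC term are assumptions.

```python
import numpy as np

def hue_spectrum_par(hue, bins=64):
    """Peak-to-average ratio of the hue histogram's magnitude spectrum.

    hue: 1-D array of per-pixel hue values in [0, 1) (assumed input format)
    """
    hist, _ = np.histogram(hue, bins=bins, range=(0.0, 1.0))
    # Magnitude spectrum of the hue histogram; the DC component is
    # excluded so the ratio reflects spectral shape, not total pixel count.
    spectrum = np.abs(np.fft.rfft(hist.astype(float)))[1:]
    # Peak-to-average ratio: how strongly one frequency dominates
    return spectrum.max() / spectrum.mean()
```

Under this definition, an image whose hues concentrate in a single narrow band yields a flat spectrum (PAR near 1), while hues concentrated in a few regularly spaced bands produce a peaked spectrum and a higher PAR.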