A neural network pavement crack identification method combined with discreteness analysis is proposed. After grey-scale transformation and image enhancement, the images are divided into two groups, one for training and one for testing. The images in the training group are partitioned into a series of sub-blocks: sub-blocks containing cracks are taken as positive samples, while sub-blocks containing shadows and normal road surface are taken as negative samples. Features extracted from the two sample sets are used to train the model, and the trained model is then used to recognize cracks in the test group. To remove the small number of mis-recognized points, a discreteness analysis is applied to the classification results. Comparative recognition experiments on clean and shadowed pavement were carried out with the traditional grey-value method and the proposed method on both asphalt and cement pavement. Experimental results show that on clean roads the traditional grey-value method differs little from the neural network method combined with discreteness analysis, whereas on shadowed roads the difference is large.
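The abstract describes the discreteness analysis only at a high level. The Python sketch below shows one plausible reading, in which a sub-block flagged as crack is kept only if enough of its neighbouring sub-blocks are also flagged, so that isolated false detections are discarded. The block size, the 8-neighbourhood rule, the threshold, and all function names are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def split_into_blocks(gray_image, block_size=32):
    """Split a grey-scale image into non-overlapping square sub-blocks,
    keyed by their (row, column) grid index."""
    h, w = gray_image.shape
    blocks = {}
    for r in range(0, h - block_size + 1, block_size):
        for c in range(0, w - block_size + 1, block_size):
            blocks[(r // block_size, c // block_size)] = \
                gray_image[r:r + block_size, c:c + block_size]
    return blocks

def discreteness_filter(positive_cells, min_neighbours=1):
    """Discard isolated positive blocks: a real crack usually spans several
    adjacent sub-blocks, so a positive block with too few positive
    neighbours in its 8-neighbourhood is treated as a false detection."""
    positives = set(positive_cells)
    kept = set()
    for (i, j) in positives:
        neighbours = sum((i + di, j + dj) in positives
                         for di in (-1, 0, 1) for dj in (-1, 0, 1)
                         if (di, dj) != (0, 0))
        if neighbours >= min_neighbours:
            kept.add((i, j))
    return kept

# Example: (10, 2) has no positive neighbours and is dropped.
# discreteness_filter({(3, 7), (3, 8), (10, 2)}) -> {(3, 7), (3, 8)}
```

In this reading, the per-block classifier (the neural network) runs first, and the set of grid indices it labels positive is passed through `discreteness_filter` before the crack map is assembled.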
To solve the occlusion problem in optical tracking systems (OTS) for surgical navigation, this paper proposes a sensor fusion approach and an adaptive display method to handle cases of partial or total occlusion. In the sensor fusion approach, the full 6D pose information provided by the optical tracker is used to estimate the bias of the inertial sensors when all of the markers are visible. When partial occlusion occurs, the optical system can still track the position of at least one marker, which is combined with the orientation estimated from the inertial measurements to recover the full 6D pose. When all of the markers are invisible, position tracking is based on the outputs of the inertial measurement unit (IMU), which may accumulate increasing drift error. To alert the user when the drift error is large enough to affect navigation, images adapted to the drift error are displayed in the user's field of view. The experiments are performed with an augmented reality head-mounted display (HMD), which displays the AR images, and a hybrid tracking system (HTS) consisting of an OTS and an IMU. Experimental results show that with the proposed sensor fusion approach, the 6D pose of the head with respect to the reference frame can be estimated even under partial occlusion. With the help of the proposed adaptive display method, users can recover the view of the markers when the error is considered relatively high.
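The mode-switching logic implied by the abstract can be summarized in a short sketch: full optical pose when all markers are visible, IMU orientation plus visible-marker position under partial occlusion, and IMU dead reckoning with a drift warning under total occlusion. The function name, the averaging of visible marker positions, and the drift-threshold test below are assumptions made for illustration; the paper's actual fusion filter and bias-estimation details are not reproduced here.

```python
import numpy as np

def fuse_pose(visible_marker_positions, optical_pose, imu_orientation,
              imu_position, drift_error, drift_threshold):
    """Select a pose source according to how many optical markers are visible.

    visible_marker_positions : list of 3-vectors from the optical tracker
    optical_pose             : (R, t) full 6D pose when all markers are seen, else None
    imu_orientation          : 3x3 rotation matrix integrated from the IMU
    imu_position             : 3-vector position propagated from the IMU
    drift_error              : current estimate of accumulated IMU drift
    drift_threshold          : level at which the adaptive display warns the user
    """
    if optical_pose is not None:
        # Full visibility: use the optical tracker's 6D pose; elsewhere this
        # pose is also used to re-estimate the inertial sensor biases.
        R, t = optical_pose
        mode = "optical"
    elif len(visible_marker_positions) >= 1:
        # Partial occlusion: orientation from the IMU, position from the
        # marker(s) that remain visible (here simply averaged).
        R = imu_orientation
        t = np.mean(np.asarray(visible_marker_positions), axis=0)
        mode = "hybrid"
    else:
        # Total occlusion: fall back to IMU dead reckoning; drift grows.
        R, t = imu_orientation, imu_position
        mode = "inertial"

    # The adaptive display alerts the user only when tracking is inertial
    # and the accumulated drift exceeds the chosen threshold.
    warn_user = (mode == "inertial") and (drift_error > drift_threshold)
    return R, t, mode, warn_user
```

A caller would invoke `fuse_pose` once per tracking frame and, when `warn_user` is set, switch the HMD overlay to the error-adapted images so the user can reposition the markers back into the tracker's view.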