8 November 2018 A parallel matching algorithm for archaeological fragment object designs based on Hausdorff distance
Cultural relic objects recovered from archaeological excavations are often incomplete fragments. Ancient and classical designs are preserved on the surfaces of these objects, but because the objects are incomplete, each fragment carries only part of the full design. Since the sherds and fragment objects suffer from serious corrosion, the fragment designs are often obscure and do not cover the complete design. Manual matching is clearly inefficient and practically impossible for some complicated partial-to-global matches. This paper presents a feature matching algorithm to overcome these difficulties. First, the color image of a sherd object is converted to a grayscale image. The edges and feature curves of the design are then detected and thinned to one-pixel-wide edges and curves. The grayscale image is enhanced and denoised, and evidently missing curves are added while incorrect curves are removed manually. A fast matching algorithm is used to exclude impossible matches from the design image database. For each remaining candidate design, a parallel image matching algorithm based on the Hausdorff distance is applied. The matching process consists of a translation and a rotation transform: the set of candidate translations t is divided into subsets that are assigned to different processors, each of which computes the Hausdorff distance over all rotations and attempts to match the image of the fragment object. All processors stop as soon as any processor reports a successful match. Experiments on a set of fragment designs show that the algorithm is efficient and outperforms the traditional matching method.
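The core matching step described above can be sketched in Python. This is a minimal illustration, not the paper's implementation: it assumes the fragment and the database design are given as 2-D point sets extracted from the one-pixel-wide feature curves, and all function names, the directed-distance choice, and the pose search are assumptions for illustration.

```python
import math

def directed_hausdorff(A, B):
    """Directed Hausdorff distance h(A, B) = max over a in A of
    the distance from a to its nearest point b in B."""
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    """Symmetric Hausdorff distance H(A, B) = max(h(A,B), h(B,A))."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

def transform(points, angle, tx, ty):
    """Rotate a point set by `angle` (radians), then translate by (tx, ty)."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

def best_match(fragment, design, translations, angles, threshold):
    """Search the given translations and rotations for a pose whose
    directed Hausdorff distance from the transformed fragment to the
    design falls below `threshold`; return (distance, pose) or None.
    The directed distance is used because a partial fragment should lie
    inside the full design, not vice versa."""
    best = None
    for tx, ty in translations:
        for a in angles:
            d = directed_hausdorff(transform(fragment, a, tx, ty), design)
            if d < threshold and (best is None or d < best[0]):
                best = (d, (a, tx, ty))
    return best
```

In the parallel scheme the paper describes, the `translations` list would be partitioned into subsets and each subset handed to a different processor (e.g., via a process pool), with an early-stop signal broadcast as soon as any worker finds a match below the threshold.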
© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Rongteng Wu "A parallel matching algorithm for archaeological fragment object designs based on Hausdorff distance", Proc. SPIE 10817, Optoelectronic Imaging and Multimedia Technology V, 1081713 (8 November 2018);
