We present a solution to a common problem in industrial machine vision: identifying and estimating the orientation of touching mechanical parts on a plane surface. The algorithm is based on watershed segmentation and can handle cases where objects touch. After an initial thresholding step, we extract the edges of the binary image, both the outer edge and the edges around holes inside the object. We then apply a distance transformation to create a distance map, i.e., an image in which each pixel value is the distance to the nearest edge pixel. The watershed algorithm is applied to the distance map, yielding an image in which some objects may be segmented into several parts.

For every segment we calculate the center of gravity of its surrounding edge pixels. These centers of gravity suffice to estimate the orientation of objects that have been split into more than one segment. By also calculating the centers of gravity of an object's holes and using them in the same way, we can estimate the orientation of objects that have holes. To recognize the mechanical parts, we use the distances between the centers of gravity of their segments and holes, together with the greatest maximum of the distance map found inside each of them. We also calculate the lengths of the segment peripheries and use them to distinguish the objects.

Mechanical parts that consist of a single segment without holes can certainly be located, and perhaps recognized, but their rotation cannot be estimated in this way. For those objects we construct a circle around the center of gravity, with the corresponding greatest maximum of the distance map as radius. We collect the values of the distance map along this circle and plot them as a function of the angle to the horizontal axis. From the maxima and minima of this function we estimate the rotation of the object; the same information can also be used to identify it. For the overall control algorithm we use fuzzy logic.
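The distance-map and circle-sampling steps described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the brute-force distance transform and the function names are assumptions (a production system would use an efficient transform such as scipy.ndimage.distance_transform_edt), and the sampling count is arbitrary.

```python
import math

def distance_map(binary):
    """Brute-force distance transform: each object pixel receives the
    Euclidean distance to the nearest edge pixel (an object pixel with
    a 4-neighbour in the background). Illustrative only; real systems
    use a linear-time distance transform."""
    h, w = len(binary), len(binary[0])
    edges = [(y, x) for y in range(h) for x in range(w)
             if binary[y][x] and any(
                 not (0 <= y + dy < h and 0 <= x + dx < w
                      and binary[y + dy][x + dx])
                 for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)))]
    dist = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if binary[y][x]:
                dist[y][x] = min(math.hypot(y - ey, x - ex)
                                 for ey, ex in edges)
    return dist

def orientation_profile(dist, cy, cx, radius, samples=72):
    """Sample the distance map on a circle around (cy, cx).  The angles
    at which the resulting profile has its maxima and minima indicate
    the rotation of a one-segment, hole-free object."""
    profile = []
    for k in range(samples):
        a = 2 * math.pi * k / samples
        y = int(round(cy + radius * math.sin(a)))
        x = int(round(cx + radius * math.cos(a)))
        profile.append((math.degrees(a), dist[y][x]))
    return profile
```

For a centered square object, for example, the profile peaks toward the four corners and dips toward the edge midpoints, so the peak angles recover the square's rotation modulo 90 degrees.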
As a final step, to verify the identification of the mechanical parts and to obtain a better estimate of their orientation, we perform edge matching using the distance map, which gives us quantitative measures of how well the edges match. This yields more accurate estimates than can be achieved by statistical methods.
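The edge-matching verification can be sketched as a chamfer-style score: the model's edge points are rotated and translated onto the scene, and the distance-map values they land on are averaged, so a low score means the model edges lie close to the scene edges. This is an assumed formulation, not necessarily the authors' exact procedure, and it presumes a distance map defined over the whole image (distance to the nearest edge pixel everywhere).

```python
import math

def chamfer_score(dist_map, model_edge_pts, angle_deg, ty, tx):
    """Average distance-map value under the transformed model edge
    points; lower is a better match.  model_edge_pts are (y, x) offsets
    relative to the model's reference point."""
    a = math.radians(angle_deg)
    ca, sa = math.cos(a), math.sin(a)
    h, w = len(dist_map), len(dist_map[0])
    total, n = 0.0, 0
    for my, mx in model_edge_pts:
        # rotate the model point by angle_deg, then translate to (ty, tx)
        y = int(round(ty + my * ca - mx * sa))
        x = int(round(tx + my * sa + mx * ca))
        if 0 <= y < h and 0 <= x < w:
            total += dist_map[y][x]
            n += 1
    return total / n if n else float('inf')
```

Evaluating the score over a small neighbourhood of candidate rotations and translations, and keeping the minimum, both confirms the identification and refines the pose estimate.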