Several low-level vision algorithms have been implemented on a 16-node hypercube processor (AMETEK S-14) by exploiting its network-embedding capability. These include edge detection with the Sobel operator, histogramming, one-pass parallel binary image thinning, and noise cleaning. The primary objective is to parallelize these algorithms by mapping the image onto a suitable processor topology and to measure the actual speedup of the parallel implementation over sequential
programming. Two basic topologies, the ring and the nearest-neighbor network, are mapped onto the hypercube system. Several 512 x 512 gray-level images have been processed concurrently.
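The standard way to embed a ring in a hypercube, and a plausible reading of the network-embedding step described above, is the reflected binary Gray code: consecutive ring positions are assigned node labels that differ in exactly one bit, so ring neighbors are also physical hypercube neighbors. A minimal sketch (the function name is illustrative, not from the paper):

```python
def gray_code(n):
    """Reflected binary Gray code: ring position i -> label of a 2**n-node hypercube."""
    return [i ^ (i >> 1) for i in range(2 ** n)]

# Embed a 16-node ring in the 4-dimensional hypercube.
ring = gray_code(4)

# Successive ring positions (wrapping around) map to hypercube nodes
# whose labels differ in a single bit, i.e. directly connected processors.
for a, b in zip(ring, ring[1:] + ring[:1]):
    assert bin(a ^ b).count("1") == 1
```

With this mapping, each processor holds a contiguous band of image rows and exchanges boundary rows only with its two ring neighbors, which are one hop away in the hypercube.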
A tenfold speedup has been obtained over the sequential implementation running on a single processor of the concurrent system. This figure excludes the host-to-node, node-to-host, and I/O communication times.
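The reported figure can be put in context with the usual speedup and efficiency definitions; the timings below are hypothetical, chosen only to be consistent with a tenfold speedup on the 16-node machine:

```python
def speedup(t_seq, t_par):
    """Ratio of single-processor runtime to parallel runtime."""
    return t_seq / t_par

def efficiency(s, p):
    """Fraction of ideal linear speedup achieved on p processors."""
    return s / p

# Hypothetical timings matching the reported result.
s = speedup(10.0, 1.0)       # tenfold speedup
e = efficiency(s, 16)        # 0.625 on 16 nodes
```

An efficiency of 0.625 rather than 1.0 reflects the boundary-exchange overhead between neighboring processors that remains even when host and I/O communication is excluded.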