Novel edge detection and line-fitting machine vision algorithms are applied to linewidth measurement on optical images of integrated circuits, achieving subpixel resolution. The strategy employs a two-step procedure. In the first step, a neural network is used for edge detection of the image. Three neural network approaches are investigated: self-organizing, bootstrap linear threshold, and constrained maximization strategies. The weights of the neural networks are estimated using unsupervised learning procedures, whose advantage is the ability to adapt to the imaging environment. Consequently, the proposed neural network approaches to edge detection do not require an a priori database of images with known linewidths for calibration. In the second step, line-fitting methods are applied to the edge maps produced by the neural network to compute linewidth. Two methods are investigated: the Hough transform and an eigenvector strategy. With this two-step strategy, the entire image is used to estimate linewidth, as opposed to just a single line scan or a few line scans. Thus, edge roughness effects can be spatially averaged to obtain an optimal estimate of linewidth, and subpixel resolution can be achieved. The accuracy (or variance) of this estimate will, of course, depend on factors such as pixel size and the capability of the imaging system. The techniques are general and can be used on images from a variety of microscopes, including optical and electron-beam instruments.
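The first step, unsupervised edge detection calibrated to the imaging environment, can be illustrated with a minimal sketch. This is not the paper's neural-network detectors; it stands in for them with a linear-threshold rule whose threshold is learned from the image's own gradient statistics rather than from a labeled database, which captures the same self-calibrating idea.

```python
import numpy as np

def adaptive_edge_map(image):
    """Illustrative unsupervised edge detector (hypothetical helper,
    not the authors' networks): threshold the gradient magnitude at a
    level derived from this image's own statistics, so no a priori
    calibration images are needed."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    # The threshold adapts to the imaging environment of this image alone.
    thresh = mag.mean() + mag.std()
    return mag > thresh
```

Applied to a step-edge test image, the detector marks the transition columns without any tuned parameters.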
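The Hough transform line-fitting step can be sketched as a rho-theta voting procedure over the edge map: every edge pixel votes for all lines passing through it, and the accumulator peak gives the dominant line. The bin resolutions below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def hough_line(points, n_theta=180, rho_res=0.5):
    """Minimal rho-theta Hough transform over edge-pixel coordinates
    (illustrative sketch). Returns (rho, theta) of the accumulator peak,
    i.e. the dominant line x*cos(theta) + y*sin(theta) = rho."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    # Each edge point votes for one rho per theta.
    rho = points[:, [0]] * np.cos(thetas) + points[:, [1]] * np.sin(thetas)
    rho_max = np.abs(rho).max()
    bins = np.arange(-rho_max, rho_max + rho_res, rho_res)
    acc = np.zeros((len(bins) - 1, n_theta), dtype=int)
    for j in range(n_theta):
        hist, _ = np.histogram(rho[:, j], bins=bins)
        acc[:, j] += hist
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    return 0.5 * (bins[i] + bins[i + 1]), thetas[j]
```

Because every edge pixel contributes a vote, the recovered line reflects the whole edge rather than a single scan line, which is what allows roughness to be averaged out.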
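The eigenvector line-fitting strategy can likewise be sketched: fit each edge as the line through the points' centroid along the dominant eigenvector of their covariance (a total-least-squares fit), then take linewidth as the mean perpendicular separation between the two fitted edges. The function names and the width convention are assumptions for illustration.

```python
import numpy as np

def fit_line_eig(points):
    """Total-least-squares line fit via the dominant eigenvector of the
    point-cloud covariance (sketch). Returns centroid, direction, normal."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    w, v = np.linalg.eigh(cov)
    direction = v[:, np.argmax(w)]              # line direction
    normal = np.array([-direction[1], direction[0]])
    return centroid, direction, normal

def linewidth(edge_a, edge_b):
    """Mean perpendicular distance of one edge's points from the line
    fitted to the other edge; averaging over all points spatially
    smooths edge roughness, giving a subpixel width estimate."""
    centroid, _, normal = fit_line_eig(edge_a)
    return np.abs((edge_b - centroid) @ normal).mean()
```

On two noisy parallel edges the estimate converges to the true separation well below the per-pixel noise level, since the roughness averages out over the whole image.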