Optical character recognition (OCR) algorithms typically start from a binary label image. The need for a binary image is complicated by the fact that most imaging devices produce multivalued data: a grey-scale image. The problem then becomes how to extract the meaningful character data from the grey-scale image. Image artifacts such as dirt, variations in background intensity, and imaging noise complicate character extraction.

When inspecting packages moving on a conveyor belt, we have control over the optical parameters of the system. Via autofocus and controlled lighting, parameters such as the optical path length, field of view, and illumination intensity may be adjusted. However, no control can be exercised over the labels themselves: the label reading system is entirely subject to the package sender's whimsy.

We describe the development of a recurrent neural network that segments grey-scale label images into binary label images. To determine a pixel's label, the network takes into account three sources of information: pixel intensities, correlations between neighboring labels, and edge gradients. These three sources of information are succinctly combined via the network's energy function. By changing its label state to minimize the energy function, the network satisfies constraints imposed by the input image and by the current label values.

The network has no knowledge of shape. Information on what constitutes a desirable shape is probably unwarranted at this earliest stage of image processing. Although significant image filtering could be performed by a network that knows what characters should look like, such knowledge is unavoidably font specific. Further, there is the problem of teaching the network about shapes; learning is typically extremely time consuming. Our network does not need to be taught. Finally, to be mappable to analog hardware, it is desirable that the neural equations be deterministic. Two deterministic networks are developed and compared.
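To make the energy-minimization idea concrete, the following is a minimal NumPy sketch of a deterministic iterated-update scheme of the kind described above. It is not the paper's actual network: the function name `segment_icm`, the Ising-style energy, and the weights `lam` and `beta` are illustrative assumptions. The energy combines the three information sources named in the text: an intensity (data) term, a neighboring-label correlation term, and edge gradients that weaken the neighbor coupling across strong intensity edges.

```python
import numpy as np

def segment_icm(image, beta=4.0, lam=2.0, n_iters=20):
    """Deterministic binary segmentation by iterated local label updates.

    Labels s_i take values in {-1 (ink), +1 (background)}.  The total
    energy being minimized is (an illustrative assumption, not the
    paper's exact formulation):

        E = -lam * sum_i data_i * s_i  -  sum_<ij> w_j * s_i * s_j

    where data_i is the pixel intensity relative to the global mean and
    w_j = exp(-beta * |grad|_j) is an edge-dependent coupling: a strong
    gradient at a neighbor weakens the pull toward that neighbor's label.
    """
    img = image.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)  # normalize to [0, 1]

    # Intensity term: negative suggests ink, positive suggests background.
    data = img - img.mean()

    # Edge gradients: large gradient -> weak coupling across that pixel.
    gy, gx = np.gradient(img)
    w = np.exp(-beta * np.hypot(gx, gy))

    s = np.where(data < 0, -1.0, 1.0)  # initial labels from intensity alone
    for _ in range(n_iters):
        # Accumulate edge-weighted neighbor labels w_j * s_j from the
        # four-connected neighborhood (zero-padded at the borders).
        ws = w * s
        nb = np.zeros_like(s)
        nb[1:, :] += ws[:-1, :]
        nb[:-1, :] += ws[1:, :]
        nb[:, 1:] += ws[:, :-1]
        nb[:, :-1] += ws[:, 1:]
        # Each pixel deterministically takes the label that minimizes its
        # local energy contribution: s_i = sign(lam * data_i + nb_i).
        s_new = np.where(lam * data + nb >= 0, 1.0, -1.0)
        if np.array_equal(s_new, s):
            break  # a fixed point of the update is an energy minimum
        s = s_new
    return s < 0  # boolean mask: True where the pixel is labeled ink
```

Because every update strictly lowers the energy or leaves it unchanged, the iteration reaches a fixed point without any stochastic annealing, which is what makes this style of network attractive for a direct mapping to analog hardware.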