Region of interest (ROI) determination is the crucial first step performed in an automatic target recognition (ATR) system. The goal of ROI determination is to identify candidate regions that may contain potential targets. To be most effective, this initial detection (or focus-of-attention) stage must reject clutter (noise or countermeasures that exhibit target-like characteristics) while ensuring that regions with true targets are not missed. We present a novel approach to ROI determination in synthetic aperture radar (SAR) images for ATR, based on the premise that regions containing targets require a model with more free parameters to smoothly approximate the magnitude of the return. Toward that end, we use a sigmoidal multilayer feedforward neural network with selected lateral connections between hidden-layer neurons to approximate the return in disjoint square patches of the SAR image. This network provably uses as few neurons as possible to produce a desired approximation and thus enables the determination of the number of parameters used in approximating the return in an image patch. Those patches that require a large number of neurons (more free parameters) are then labeled as ROIs. Results obtained with synthetic and real-world SAR images demonstrate the effectiveness of the proposed method. A significant advantage of the proposed method is that it does not require a training data set, which, given the variability in SAR images and target signatures, is difficult to obtain.
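The complexity-based detection idea can be illustrated with a minimal sketch. As a stand-in for the paper's constructive network with lateral connections, this sketch grows a pool of fixed random sigmoid hidden units with a least-squares output layer, and reports the smallest number of units needed to approximate a patch to a given tolerance; all function names, patch sizes, and thresholds are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def patch_complexity(patch, max_neurons=32, tol=1e-3, seed=0):
    """Return the smallest number of sigmoid hidden units whose
    least-squares readout fits the patch magnitudes to within `tol`
    mean-squared error (max_neurons if none suffice).
    Illustrative stand-in for the paper's minimal-neuron network."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Pixel coordinates, normalized to [0, 1]^2, are the network inputs.
    X = np.column_stack([xs.ravel() / (w - 1), ys.ravel() / (h - 1)])
    t = patch.ravel().astype(float)           # target: return magnitude
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=4.0, size=(2, max_neurons))
    b = rng.normal(scale=2.0, size=max_neurons)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))    # fixed sigmoid features
    for n in range(1, max_neurons + 1):
        A = np.column_stack([np.ones(len(t)), H[:, :n]])  # bias + n units
        coef, *_ = np.linalg.lstsq(A, t, rcond=None)
        mse = np.mean((A @ coef - t) ** 2)
        if mse <= tol:
            return n
    return max_neurons

# A smooth background patch needs few units; a patch with a bright,
# target-like blob needs more, so it would be flagged as an ROI.
flat = np.full((8, 8), 0.1)
yy, xx = np.mgrid[0:8, 0:8]
bump = 0.1 + np.exp(-((xx - 3.5) ** 2 + (yy - 3.5) ** 2) / 2.0)
print(patch_complexity(flat), patch_complexity(bump))
```

Thresholding `patch_complexity` over all disjoint patches then yields the candidate ROI map; the threshold itself would be chosen for the sensor and clutter statistics at hand.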