When a 2D binary image, e.g., the edge-detected boundary of a feature E1 (say, the left eye) in a digital image IM1 (say, a facial photograph of a person P1), is allowed to vary between two extreme boundary curves C11 and C12 (e.g., C11 is the boundary of P1's left eye with a serious expression, and C12 is that with a beaming smile), we can select the center-of-mass (CM) point of each curve and represent the two curves by radial functions R11 = f11(theta) and R12 = f12(theta), where R11 and R12 are the radial coordinates of C11 and C12 at angle theta in polar coordinates centered at the CM point. Similarly, for a similar image IM2 (of a person P2), we obtain boundary curves R21 = f21(theta) and R22 = f22(theta) for the left eye of P2 in the two extreme facial expressions. Sampling the analog functions R11, R12, R21, R22 at 36 angles, in 10-degree increments, then yields four ANALOG vectors of 36 dimensions each.

The mapping of the extreme boundaries R11 and R12 to +1 (identifying P1), and of R21 and R22 to -1 (identifying P2), can then serve as the learning map required in the design of a neural network for the precise, automatic identification of two very similar patterns. This paper reports the theory and experiments of a novel, non-iterative neural network that learns this extreme-boundary mapping derived from any two VERY SIMILAR patterns. It learns not only the static features of each pattern, but also the dynamic range over which those features vary. After learning, it then AUTOMATICALLY identifies any UNTRAINED pattern (which can be ANY gradual change between the two extreme boundaries) as belonging to IM1 or IM2. The neural network DOES NOT have to learn any of these gradual changes one by one.
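As an illustration of the sampling step described above, the following sketch converts a boundary curve (a list of (x, y) points) into a 36-dimensional radial vector about its center of mass, using nearest-angle sampling. The function name `radial_signature` and the nearest-neighbour angle lookup are illustrative assumptions, not the paper's implementation; the labels in the small training set mirror the +1/-1 learning map of the text.

```python
import math

def radial_signature(points, n_samples=36):
    """Sample a closed boundary curve (list of (x, y) points) as a radial
    function R = f(theta) about its center of mass, at n_samples evenly
    spaced angles (10-degree increments when n_samples = 36)."""
    # center of mass (CM) of the boundary points
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    # polar coordinates (theta in [0, 2*pi), radius) relative to the CM
    polar = [(math.atan2(y - cy, x - cx) % (2 * math.pi),
              math.hypot(x - cx, y - cy)) for x, y in points]
    step = 2 * math.pi / n_samples
    signature = []
    for k in range(n_samples):
        theta = k * step
        # radius of the boundary point closest in angle (wrap-around aware)
        _, r = min(polar, key=lambda pr: min(abs(pr[0] - theta),
                                             2 * math.pi - abs(pr[0] - theta)))
        signature.append(r)
    return signature

# Hypothetical usage: build the 4-vector learning map from the four
# extreme boundary curves C11, C12 (person P1) and C21, C22 (person P2).
# Here c11, c12, c21, c22 would be point lists from edge detection.
# X = [radial_signature(c) for c in (c11, c12, c21, c22)]
# y = [+1, +1, -1, -1]   # +1 identifies P1, -1 identifies P2
```

Nearest-angle sampling is the simplest choice; a denser boundary or angular interpolation would give a smoother radial function.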