The content of an image can be expressed in terms of different features such as color, texture, shape, or text annotations. Retrieval based on these features can vary according to how the feature values are combined. Most existing approaches assume a linear relationship between different features and require the user to assign weights to the features directly. In particular, as the number of feature classes increases, intuition about how to choose relative weights among the features is lost. While this linear combination approach established the basis of content-based image retrieval (CBIR), the usefulness of such systems has been limited by the difficulty of representing the subjectivity of human perception. In this paper, we introduce a Neural Network-based Image Retrieval system, a human-computer interaction approach to CBIR using a Radial Basis Function (RBF) network. This approach determines the nonlinear relationship between features, so that more accurate similarity comparisons between images can be supported, and it allows the user to submit a coarse initial query and continuously refine the information need via relevance feedback. Experimental results show that the proposed approach achieves better retrieval performance than existing approaches such as the linear combination approach, the rank-based method, and the backpropagation-based method.
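To make the contrast concrete, the sketch below compares the two combination schemes the abstract discusses: a linearly weighted sum of per-feature distances (the baseline) versus a Gaussian RBF network that maps the vector of per-feature distances to a similarity score nonlinearly. This is a minimal illustration, not the paper's implementation; the class name `RBFSimilarity`, the fixed centers, and the single shared width `sigma` are assumptions made for brevity (in a real system the centers and output weights would be learned from relevance feedback).

```python
import numpy as np

def linear_similarity(d, w):
    """Baseline: negated linearly weighted sum of per-feature distances d
    with user-assigned weights w (larger score = more similar)."""
    return -float(np.dot(w, d))

class RBFSimilarity:
    """Sketch of an RBF network scoring a per-feature distance vector.
    Hidden units are Gaussians over distance space; the output layer is
    a weighted sum of their activations, allowing a nonlinear relationship
    between features. Parameters here are illustrative, not the paper's."""

    def __init__(self, centers, sigma, weights):
        self.centers = np.asarray(centers, dtype=float)  # (k, n_features)
        self.sigma = float(sigma)                        # shared RBF width
        self.weights = np.asarray(weights, dtype=float)  # (k,) output weights

    def score(self, d):
        d = np.asarray(d, dtype=float)
        # Gaussian activations of the k hidden units for this distance vector
        sq = np.sum((self.centers - d) ** 2, axis=1)
        h = np.exp(-sq / (2.0 * self.sigma ** 2))
        return float(self.weights @ h)

# Toy usage: two features (e.g. color and texture distances), two hidden units.
rbf = RBFSimilarity(centers=[[0.0, 0.0], [1.0, 1.0]],
                    sigma=0.5,
                    weights=[1.0, -1.0])
```

With these illustrative parameters, a distance vector near the positively weighted center (small distances in both features) scores higher than one near the negatively weighted center, which is the kind of nonlinear preference surface a linear weighting cannot express.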