In an Adaptive Image Retrieval System (AIRS), user-system interaction takes place through an interface that supports the relevance feedback process. Most existing image retrieval systems simply display the resulting list of images (or their thumbnails) in a 2D grid, without conveying any information about the relationships between images. In this context, we propose a new interactive multiple-views interface for our AIRS, in which each view illustrates these relationships through visual attributes (colors, shapes, proximities). We identify two types of users for an AIRS: a user who searches for images, whom we refer to as an end-user, and a user who designs and studies the collection and the retrieval system, whom we refer to as a researcher-user. With these views, the interface allows the user (end-user or researcher-user) to interact with the system more effectively: the user sees more information about the query sent to the system and, by better understanding the results, learns how to refine the query iteratively. Our qualitative evaluation of these multiple views in an AIRS shows that each view has its own limitations and benefits. Together, however, the views offer complementary information that helps users improve their search effectiveness.