The differentiating characteristics of text versus images, and their impact on large medical image databases intended to support content-based indexing and retrieval, have recently been explored. For the design of powerful user interfaces, we propose grouping the various mechanisms into four classes: (i) output modules, (ii) parameter modules, (iii) transaction modules, and (iv) process modules, all of which are controlled by detailed query logging. Relevance feedback by the physician closes the loop of input, search, and output, and is commonly accepted as the most effective means of query refinement. Our modular concept provides two additional loops of interaction. Based on the detailed logging of user interaction, an inner loop allows the user to step back to any previous answer given by the system during the interactive session. Boolean linkage of successive queries is provided by an outer loop. Nevertheless, the entire data flow is controlled within a single web page by simple decision rules implemented solely with push buttons, which are easy to operate. Our approach is exemplified by an application for content-based access to medical images of similar modality, orientation, and body region using global features that model gray scale, texture, structure, and global shape characteristics. The three nested loops of interaction provide maximum flexibility with minimal complexity. The resulting extended query refinement has a significant impact on content-based image retrieval in medical applications (IRMA).
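The inner and outer interaction loops described above can be sketched in a few lines. This is a minimal illustration, not the IRMA implementation: the class and function names (`QuerySession`, `step_back`, `combine`) and the toy image corpus are assumptions introduced here for clarity.

```python
# Hypothetical sketch of extended query refinement via a detailed query log.
# Names and the toy corpus are illustrative assumptions, not the IRMA system.

class QuerySession:
    """Logs every system answer so the user can revisit any earlier result
    (the inner loop of interaction)."""

    def __init__(self):
        self.log = []  # detailed query log: list of (query, result) pairs

    def run(self, query):
        result = self._search(query)
        self.log.append((query, result))
        return result

    def step_back(self, index):
        # Inner loop: return to any previously logged (query, result) pair.
        return self.log[index]

    def _search(self, query):
        # Placeholder retrieval: select images whose attributes contain
        # all query attributes (stands in for global-feature matching).
        corpus = {
            "img1": {"modality": "xray", "region": "chest"},
            "img2": {"modality": "ct", "region": "head"},
            "img3": {"modality": "xray", "region": "hand"},
        }
        return {k for k, v in corpus.items() if query.items() <= v.items()}


def combine(results_a, results_b, op="AND"):
    # Outer loop: Boolean linkage of the result sets of successive queries.
    if op == "AND":
        return results_a & results_b
    if op == "OR":
        return results_a | results_b
    if op == "NOT":
        return results_a - results_b
    raise ValueError(f"unknown Boolean operator: {op}")
```

Usage follows the three-loop pattern: run a query, refine it, and link the answers.

```python
session = QuerySession()
r1 = session.run({"modality": "xray"})      # first query
r2 = session.run({"region": "chest"})       # refined query
linked = combine(r1, r2, "AND")             # outer loop: Boolean linkage
earlier = session.step_back(0)              # inner loop: revisit first answer
```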