The goal of interactive search-assisted diagnosis (ISAD) is to enable doctors to make more informed decisions about a given case by providing a selection of similar annotated cases. For instance, a radiologist examining a suspicious mass could study labeled mammograms of similar masses and weigh their biopsy outcomes before deciding whether to recommend a biopsy. The fundamental challenge in developing ISAD systems is the identification of similar cases, not simply in terms of superficial image characteristics, but in a medically relevant sense. This task involves three aspects: extracting a representative set of features, identifying an appropriate measure of similarity in the high-dimensional feature space, and returning the most similar matches at interactive speed. The first has been an active research area for several decades. The second has largely been ignored by the medical imaging community. The third can be achieved using the Diamond framework, an open-source platform that enables efficient exploration of large, distributed, complex data repositories. This paper focuses on the second aspect. We show that the choice of distance metric affects the accuracy of an ISAD system and that machine learning enables the construction of effective domain-specific distance metrics. Under the learned distance, data points with the same label (e.g., malignant masses) are closer than data points with different labels (e.g., malignant vs. benign), so the labels of the near neighbors of a new case are likely to be informative. We present several novel methods for distance metric learning and evaluate them on a database of 2522 mass regions of interest (ROIs) extracted from digital mammograms, with ground truth defined by biopsy results (1800 malignant, 722 benign). Our results show that learned distance metrics improve both classification (measured by ROC curves) and retrieval performance.
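To illustrate the core idea of a learned, label-aware distance, here is a minimal sketch (not the paper's actual method) of a diagonal Mahalanobis metric: each feature is weighted by its ratio of between-class to within-class variance, so features that separate the labels well dominate the distance, pulling same-label cases closer together. All function names and the toy data are illustrative assumptions.

```python
import numpy as np

def learn_diagonal_metric(X, y):
    """Learn per-feature weights (a diagonal Mahalanobis metric):
    features with high between-class relative to within-class
    variance receive larger weights.  Illustrative sketch only."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

def learned_distance(a, b, w):
    # Weighted Euclidean distance under the learned diagonal metric.
    return np.sqrt(np.sum(w * (a - b) ** 2))

# Toy data: feature 0 separates the two classes, feature 1 is noise.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], [0.1, 1.0], (50, 2)),
               rng.normal([1, 0], [0.1, 1.0], (50, 2))])
y = np.array([0] * 50 + [1] * 50)
w = learn_diagonal_metric(X, y)
```

Under such a metric, nearest-neighbor retrieval of a new case tends to return cases with the same label, which is exactly the property the paper's learned metrics aim for, albeit via more sophisticated learning methods.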