Both color and depth information can be exploited for content-based search in RGB-D imagery. Previous works dealing with global descriptors for RGB-D images advocate decision-level fusion, in which independently computed color and depth representations are combined to carry out the similarity search. In contrast, we propose a “learning-to-rank” paradigm aimed at weighting the two information channels according to the specific traits of the task and data at hand, thereby seamlessly handling the diversity across applications. In particular, we propose a method, referred to as “kNN-rank,” which learns the regularities among the outputs yielded by similarity-based queries. A further contribution is the “HyperRGBD” framework, a set of tools conceived to enable seamless aggregation of existing RGB-D datasets so as to obtain data featuring desired characteristics and cardinality.
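As a rough illustration of why a fixed decision-level fusion can be suboptimal and a learned channel weighting helps, the sketch below grid-searches a single fusion weight on held-out queries over toy data. This is not the paper's kNN-rank algorithm, only a minimal baseline under stated assumptions; all names (`alpha`, `fuse`, `top1_accuracy`) and the synthetic data are hypothetical.

```python
import random

random.seed(0)

# Toy retrieval setup: each item carries a 2-D color descriptor and a 2-D
# depth descriptor. In this synthetic data the class signal lives mostly in
# the depth channel, so the learned weight should favour depth.

def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def make_item(label):
    color = [random.gauss(label * 0.2, 1.0), random.gauss(0.0, 1.0)]  # weak cue
    depth = [random.gauss(label * 2.0, 0.3), random.gauss(label * 2.0, 0.3)]  # strong cue
    return color, depth, label

database = [make_item(lbl) for lbl in (0, 1, 2) for _ in range(20)]
queries = [make_item(lbl) for lbl in (0, 1, 2) for _ in range(10)]

def fuse(q, item, alpha):
    # decision-level fusion: weighted sum of per-channel distances
    return alpha * euclid(q[0], item[0]) + (1 - alpha) * euclid(q[1], item[1])

def top1_accuracy(alpha):
    # fraction of queries whose nearest database item shares their label
    hits = 0
    for q in queries:
        best = min(database, key=lambda item: fuse(q, item, alpha))
        hits += best[2] == q[2]
    return hits / len(queries)

# learn the channel weight from the data instead of fixing it a priori
best_alpha = max((a / 10 for a in range(11)), key=top1_accuracy)
print(best_alpha, top1_accuracy(best_alpha))
```

On this data the search settles on a low `alpha`, i.e. it weights the depth channel more heavily, mirroring the abstract's point that the right balance between the two channels depends on the data at hand.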