Rosacea is a common cutaneous disorder characterized by facial redness, swelling, and flushing, and it is usually diagnosed by a dermatologist after a visual examination. Qualitative human assessment often suffers from relatively high intra- and interobserver variability, which can negatively affect patient outcomes. Computer-assisted image analysis can complement visual assessment by human observers because it enables quantitative, consistent, and accurate analysis. Here, we combine classical multidimensional scaling (MDS) with deep convolutional neural networks (CNNs) into an efficient framework for identifying rosacea lesions. MDS is used to select an appropriately sized subset of the training data, which is then used to train Inception-ResNet-v2 to classify facial images into rosacea and non-rosacea regions. Using a leave-one-patient-out cross-validation scheme with 128 × 128 non-overlapping image patches, the method achieved a class-weighted average Dice coefficient (DC) of 82.1% ± 2.4% and an accuracy of 85.0% ± 0.6%. While this average performance is almost identical to our previous results (81.7% ± 2.7% and 84.9% ± 0.6% for DC and accuracy, respectively), the new scheme uses approximately 90% less data to train the system. We also report quantitative experiments with overlapping patches extracted with a stride of 50 pixels. With the same experimental setup, we observed training speedups of 25.6× (128 × 128), 23.4× (192 × 192), and 23.2× (256 × 256) relative to the baseline of training the network on the entire training set. For 192 × 192 overlapping patches, the class-weighted average DC of the proposed method is 83.9% ± 2.1%, compared with 84.4% ± 2.2% when the entire training set is used at each fold. We conclude that the proposed method is an efficient way to train deep neural networks using only a small subset of the training data.
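The class-weighted average Dice coefficient used as the evaluation metric above can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; the function names and the choice to weight each class by its share of the ground-truth pixels are our assumptions, since the abstract does not spell out the exact weighting scheme.

```python
import numpy as np

def dice_coefficient(pred, target, cls):
    """Dice coefficient for one class label: 2|P ∩ T| / (|P| + |T|)."""
    p = (pred == cls)
    t = (target == cls)
    denom = p.sum() + t.sum()
    if denom == 0:
        return 1.0  # both prediction and ground truth empty: treat as perfect agreement
    return 2.0 * np.logical_and(p, t).sum() / denom

def class_weighted_dice(pred, target, classes=(0, 1)):
    """Average the per-class Dice scores, weighting each class by its
    frequency in the ground-truth labels (assumed weighting)."""
    weights = np.array([(target == c).sum() for c in classes], dtype=float)
    weights /= weights.sum()
    scores = np.array([dice_coefficient(pred, target, c) for c in classes])
    return float(np.dot(weights, scores))

# Toy example with binary patch labels (0 = non-rosacea, 1 = rosacea)
target = np.array([0, 0, 1, 1])
pred = np.array([0, 1, 1, 1])
print(class_weighted_dice(pred, target))  # class Dice scores 2/3 and 4/5, equal class weights
```

In a leave-one-patient-out setup, this score would be computed per held-out patient and then averaged across folds to obtain the reported mean ± standard deviation.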