Deep learning models based on Convolutional Neural Networks (CNNs) are successful tools in many classification and segmentation studies. Although such models can achieve impressive performance, we still lack effective means to interpret, in a data-driven manner, how a model, its features, and the associated input data combine to produce good predictions. In this paper, we propose a novel investigation to interpret a deep-learning-based model for breast cancer risk prediction using screening digital mammogram images. First, we build a CNN-based risk prediction model using normal screening mammogram images. We then develop two separate schemes to explore the model's interpretability. In Scheme 1, we apply a sliding-window-based approach to modify the input images: we keep only the sub-regional imaging data inside the sliding window, pad all other regions with zeros, and observe how such a sub-regional input changes the model's performance. We generate heatmaps of the AUCs over all sliding-window positions and show that these heatmaps can help interpret a potential correlation between a given sliding window and the variation of the model's AUC. In Scheme 2, we follow a saliency-map-based approach to create a Contribution Map (CM), in which the value of each pixel reflects how strongly that pixel contributes to the prediction of the output label. Over a CM, we then identify a bounding box around the most informative sub-area, interpreting the corresponding sub-area of the image as the region most predictive of risk. This preliminary study demonstrates a proof of concept for developing effective means to interpret deep learning CNN models.
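The sliding-window scheme can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `model` here is a hypothetical stand-in scoring function (the paper uses a trained CNN), the window size and stride are arbitrary, and the AUC is computed with a simple rank-based (Mann-Whitney) formula.

```python
import numpy as np

def auc_score(labels, scores):
    """Rank-based AUC: fraction of (positive, negative) pairs ranked
    correctly, with ties counted as 0.5."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diffs = pos[:, None] - neg[None, :]
    return ((diffs > 0).sum() + 0.5 * (diffs == 0).sum()) / (len(pos) * len(neg))

def sliding_window_auc_heatmap(images, labels, model, win=16, stride=16):
    """For each window position, keep only that sub-region of every image
    (zero-padding the rest), re-score all images, and record the AUC."""
    h, w = images.shape[1:3]
    rows = (h - win) // stride + 1
    cols = (w - win) // stride + 1
    heat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            r, c = i * stride, j * stride
            masked = np.zeros_like(images)
            masked[:, r:r + win, c:c + win] = images[:, r:r + win, c:c + win]
            heat[i, j] = auc_score(labels, model(masked))
    return heat

# Toy demonstration with a planted signal; the "model" is simply the
# mean image intensity (a hypothetical stand-in for a trained CNN).
rng = np.random.default_rng(0)
labels = np.array([0, 0, 1, 1])
images = rng.random((4, 64, 64))
images[labels == 1, 8:24, 8:24] += 1.0  # boost one sub-region for positives
model = lambda x: x.mean(axis=(1, 2))
heat = sliding_window_auc_heatmap(images, labels, model, win=16, stride=16)
```

Windows overlapping the planted sub-region yield high AUCs, so the resulting heatmap highlights where the informative signal lies, which mirrors the interpretation the heatmaps provide for the mammogram model.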