Lung cancer screening with computed tomography (CT) has shown great benefit for early cancer detection, but substantial effort is required to rule out the associated false positives, and biopsy is the most costly of the rule-out options. It is therefore valuable to study lung cancer through image analysis in order to reduce the number of unnecessary biopsies. However, it is extremely difficult to obtain, in a short period, enough hospital data with biopsy reports for a machine learning study. This study therefore explores transfer learning techniques to predict unnecessary biopsies from a very small dataset of pathologically proven nodule CT images. To overcome the large-data requirement of CNN architectures (such as the VGG used in this study), we initialize with parameters pretrained on ImageNet. We then fine-tune the parameters of different architectures on a portion of the labeled pulmonary nodule dataset with ground truth. Fifty repetitions of a cross-validation scheme with a two-thirds training and one-third testing split are used to measure the performance of the different deep transfer learning architectures. From the classification results, shown as ROC curves and AUC values, we find that deep features transferred from natural images improve the AUC by 0.1663 over a traditional machine learning method based on texture features extracted directly from gray-level images. Moreover, our modified 8-layer VGG architecture, which yields less abstract features, achieves a 0.1081 higher AUC than the more abstract deeper variants on the recognition of malignant nodules.
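The evaluation protocol above (fifty repetitions of a random two-thirds training / one-third testing split, scored by AUC) can be sketched as follows. This is a minimal, standard-library-only illustration, not the authors' code: the function names `auc_score`, `repeated_holdout_auc`, and `fit_fn` are hypothetical, and `fit_fn` is a placeholder standing in for fine-tuning the pretrained VGG on the training split. AUC is computed here via the Mann-Whitney rank statistic, which is equivalent to the area under the ROC curve.

```python
import random

def auc_score(labels, scores):
    """ROC AUC via the Mann-Whitney rank statistic, with average ranks for ties."""
    pairs = sorted(zip(scores, labels))  # ascending by score; labels are 0/1
    n = len(pairs)
    rank_sum_pos = 0.0
    idx = 0
    while idx < n:
        j = idx
        while j < n and pairs[j][0] == pairs[idx][0]:
            j += 1  # group tied scores
        avg_rank = (idx + 1 + j) / 2.0  # average of 1-based ranks idx+1 .. j
        rank_sum_pos += avg_rank * sum(pairs[k][1] for k in range(idx, j))
        idx = j
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    return (rank_sum_pos - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)

def repeated_holdout_auc(samples, labels, fit_fn, repetitions=50, seed=0):
    """Mean AUC over repeated random 2/3-train / 1/3-test splits."""
    rng = random.Random(seed)
    n = len(samples)
    aucs = []
    for _ in range(repetitions):
        order = list(range(n))
        rng.shuffle(order)
        cut = (2 * n) // 3
        train_idx, test_idx = order[:cut], order[cut:]
        # fit_fn stands in for fine-tuning a pretrained CNN on the training
        # split; it returns a scoring function for unseen nodules.
        scorer = fit_fn([samples[i] for i in train_idx],
                        [labels[i] for i in train_idx])
        y = [labels[i] for i in test_idx]
        if 0 < sum(y) < len(y):  # AUC needs both classes in the test split
            aucs.append(auc_score(y, [scorer(samples[i]) for i in test_idx]))
    return sum(aucs) / len(aucs)
```

As a sanity check, a scorer that perfectly separates the two classes yields a mean AUC of 1.0 under this protocol, and tied scores on a balanced pair yield 0.5.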