We are developing quantitative image analysis methods for early diagnosis of lung cancer. We designed a hybrid deep learning (H-DL) method for volume segmentation of lung nodules with large variations in size, shape, margin, and opacity. In our H-DL method, two UNet++-based DL models, one using a 19-layer VGG network and the other a 201-layer DenseNet as the backbone, were trained separately and then combined to segment nodules spanning these wide ranges of characteristics. A data set collected from LIDC-IDRI, containing 430 cases with lung nodules manually segmented by at least two radiologists, was split into 352 training cases and 78 independent test cases. The 50% consensus consolidation of the radiologists’ annotations was used as the reference standard for each nodule. For the 78 test cases with 167 nodules, our H-DL model achieved an average 3D Dice coefficient of 0.732±0.158 over all nodules. For nodules larger than 9.5 mm, nodules with margins described by LIDC-IDRI as sharp or spiculated, and nodules with structure described as lobulated or having solid opacity, the segmentation accuracy achieved by our H-DL model was not significantly different from the average of the radiologists’ manual annotations in terms of the Dice coefficient. These results demonstrate that our hybrid deep learning scheme can achieve segmentation accuracy comparable to the radiologists’ average segmentations for a wide variety of nodules.
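As a minimal sketch (not the authors' implementation), the two evaluation constructs named above can be expressed directly on binary volumes: the 50% consensus reference keeps each voxel marked by at least half of the radiologists, and the 3D Dice coefficient measures volume overlap between a predicted mask and that reference. Function names and the toy array shapes are illustrative assumptions.

```python
import numpy as np

def consensus_mask(annotations, threshold=0.5):
    """Reference standard: keep voxels marked by at least `threshold`
    (e.g. 50%) of the radiologists. `annotations` is a list of binary
    3D arrays of identical shape, one per radiologist."""
    stack = np.stack(annotations).astype(float)  # (n_raters, z, y, x)
    return stack.mean(axis=0) >= threshold

def dice_3d(pred, ref):
    """3D Dice coefficient: 2*|A∩B| / (|A| + |B|) for binary volumes."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    total = pred.sum() + ref.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, ref).sum() / total
```

For example, with two of three radiologists marking a voxel, the voxel survives the 50% consensus; a prediction covering half of an 8-voxel reference scores Dice = 2·4/(4+8) ≈ 0.667.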