Low-dose CT screening has been shown to significantly reduce mortality due to lung cancer. To assist radiologists, CAD systems continue to be developed for automatically detecting, segmenting, and categorizing potentially malignant lung nodules. Deep learning with the U-Net architecture has been shown to be effective for automatic segmentation of 2-D images. The network consists of a down-sampling path and an up-sampling path, similar to an auto-encoder. However, the concept driving its success is the use of skip connections between the down-sampling and up-sampling paths, which allow the network to preserve fine detail and ease backpropagation of error to deep layers. This concept has previously been extended to 3-D and successfully applied to image volumes such as MRI and CT scans. This paper applies concepts from these works (skip connections, batch normalization, Dice similarity coefficient loss, and strided convolution for down-sampling) to a 3-D Convolutional Neural Network for segmenting nodule image patches in thoracic CT scans from the LIDC-IDRI database. This database contains scans from 1018 cases with annotations from up to 4 human experts. Within each scan, nodules are delineated by each of these experts. It is well known that manual delineation can vary subjectively between annotators. This paper proposes a model trained on a ground truth estimated from the four expert annotations using the STAPLE algorithm. Experiments in this paper show that, when trained on STAPLE consensus, automatic segmentation with a 3-D U-Net can achieve higher similarity scores with the human annotators than the annotators achieve with one another.
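The Dice similarity coefficient mentioned above is both the evaluation metric and the basis of the training loss. A minimal NumPy sketch is given below; the function name, the `smooth` stabilization term, and the binary-mask inputs are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def dice_coefficient(pred, target, smooth=1.0):
    """Dice similarity coefficient between two binary volumes.

    `smooth` (an assumed stabilization constant) avoids division by
    zero when both masks are empty. The corresponding training loss
    is simply 1 - dice_coefficient(pred, target).
    """
    pred = pred.astype(np.float64).ravel()
    target = target.astype(np.float64).ravel()
    intersection = (pred * target).sum()
    return (2.0 * intersection + smooth) / (pred.sum() + target.sum() + smooth)
```

Identical masks yield a coefficient of 1 (with `smooth=0`), and disjoint masks yield a value near 0, so maximizing Dice directly rewards voxel-wise overlap with the STAPLE consensus mask.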