The feasibility of localizing, segmenting, and classifying individual cells in multi-channel confocal microscopy images depends strongly on image quality. In some applications with good image quality, segmentation is straightforward and can be accomplished with thresholding, watershed, or other well-established heuristics; at the limit of poor image quality and complex image features, however, these techniques fail. It is at this limit that deep convolutional neural network (DCNN) approaches excel. Our research studies the interaction of individual immune cells and their shape changes during inflammatory immune reactions [1], using multi-channel immunofluorescence imaging of renal biopsies from patients with inflammatory kidney disease. We present a deep learning methodology, applied to nuclear and cell-membrane immunofluorescent stains, that automatically segments and classifies multiple T-cell and dendritic cell types. With both T-cells and dendritic cells segmented, we can study how T-cells bearing different surface antigens change shape with proximity to dendritic cells; such shape changes occur when T-cells move close to dendritic cells and interact. We use a sliding-window, max-filtering DCNN to segment and classify 3 cell types from 6 image stain channels within a single network. The DCNN maintains images at their original resolution throughout by using dilated convolutions and max filtering in place of max-pooling layers. In addition, we use 3D convolution kernels with two spatial dimensions and one channel dimension, which allows us to output a multi-class binary classification of multichannel data at the original image resolution. We trained and validated the network across 24 patients with 8,572 segmented cells. Our results demonstrate a mean Dice-Sørensen score of 0.78 ± 0.18, a mean classification sensitivity of 0.76, and a mean classification specificity of 0.75 across all 3 segmented cell types.
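The two resolution-preserving operations named above, dilated convolution and max filtering in place of max pooling, can be sketched in plain NumPy. This is an illustrative toy, not the paper's implementation: the function names, kernel sizes, and zero/negative-infinity padding choices are our own assumptions; the point is only that both operations keep the output at the original image resolution (stride 1, "same" padding).

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation=2):
    """Stride-1 2D cross-correlation with a dilated kernel and zero
    padding, so the output keeps the original image resolution.
    (Hypothetical helper; not the authors' code.)"""
    kh, kw = kernel.shape
    pad_h, pad_w = dilation * (kh // 2), dilation * (kw // 2)
    padded = np.pad(image, ((pad_h, pad_h), (pad_w, pad_w)))
    out = np.zeros_like(image, dtype=float)
    # Each kernel tap reads the image at an offset of `dilation` pixels,
    # widening the receptive field without downsampling.
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[
                i * dilation : i * dilation + image.shape[0],
                j * dilation : j * dilation + image.shape[1],
            ]
    return out

def max_filter2d(image, size=3):
    """Stride-1 max filter: the resolution-preserving stand-in for a
    max-pooling layer. Padding with -inf keeps border maxima honest."""
    pad = size // 2
    padded = np.pad(image, pad, constant_values=-np.inf)
    out = np.empty_like(image, dtype=float)
    h, w = image.shape
    for r in range(h):
        for c in range(w):
            out[r, c] = padded[r : r + size, c : c + size].max()
    return out
```

Chaining the two, e.g. `max_filter2d(dilated_conv2d(x, k))`, returns an array with the same height and width as `x`, which is the property that lets the network emit per-pixel class maps at full resolution.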