Person reidentification is the task of matching images of the same individual captured at different times, often by different cameras. To perform matching, most methods extract features from the entire image; however, this ignores the spatial context of the information in the image. We propose a convolutional neural network based on ResNet-50 to predict the foreground of an image: the regions containing the head, torso, and limbs of a person. With this information, we use the LOMO and salient color name feature descriptors to extract features primarily from the foreground regions. In addition, we use a distance metric learning technique (XQDA) to compute optimally weighted distances between the relevant features. We evaluate on the VIPeR, QMUL GRID, and CUHK03 data sets, compare our results against a linear foreground estimation method, and show competitive or better overall matching performance.