While many aspects of the image recognition problem have been largely solved by training convolutional neural networks on large datasets, much work remains when data is sparse. For synthetic aperture radar (SAR), the scarcity of data stems both from the cost of collection and from the small size of the community that collects and uses such data. In this setting, electromagnetic simulation is an effective stopgap, but its fidelity to reality is upper bounded both by the quality of the electromagnetic prediction code and by the accuracy of the target's digital model. In practice, we find that classification models trained on synthetic data generalize poorly to measured data. In this work, we investigate three machine learning networks with the goal of bridging the gap between measured and synthetic data. We experiment with two types of generative adversarial networks as well as a modified convolutional autoencoder. Each network addresses a different aspect of the disparity between measured and synthetic data, namely: generating new, realistic, labeled data; translating data between the measured and synthetic domains; and joining the manifolds of the two domains into a shared intermediate representation. Classification results using widely employed neural network classifiers are presented for each experiment; these results suggest that such data manipulation improves classification generalization to measured data.