Rendering synthetic imagery from gaming-engine environments allows us to create data featuring any number of object orientations, conditions, and lighting variations. This capability is particularly useful in classification tasks, where there is an overwhelming lack of the labeled data needed to train state-of-the-art machine learning algorithms. However, the use of synthetic data is not without limitations: in the case of imagery, training a deep learning model on purely synthetic data typically yields poor results when the model is applied to real-world imagery. Previous work shows that "domain adaptation," mixing real-world and synthetic data, improves performance on a target dataset. In this paper, we train a deep neural network with synthetic imagery, including ordnance and overhead ship imagery, and investigate a variety of methods to adapt our model to a dataset of real images.
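The mixing step described above can be sketched as a simple dataset-blending routine. This is a minimal illustration, not the paper's method: the function name `mix_datasets` and the `real_fraction` knob are our own assumptions about how one might control the real-to-synthetic ratio in a training set.

```python
import random

def mix_datasets(synthetic, real, real_fraction=0.5, seed=0):
    """Blend real and synthetic samples into one shuffled training set.

    real_fraction is the approximate share of the mixed set that
    should come from the real-imagery pool (a hypothetical knob,
    not a parameter from the paper). All real samples are kept;
    synthetic samples are drawn to reach the target ratio.
    """
    if not 0 < real_fraction <= 1:
        raise ValueError("real_fraction must be in (0, 1]")
    rng = random.Random(seed)
    # Number of synthetic samples needed so real data makes up
    # roughly real_fraction of the final mix, capped by availability.
    n_synth = min(int(len(real) * (1 - real_fraction) / real_fraction),
                  len(synthetic))
    mixed = list(real) + rng.sample(list(synthetic), n_synth)
    rng.shuffle(mixed)
    return mixed
```

With `real_fraction=0.5`, for example, 10 real images would be paired with 10 sampled synthetic images, giving an even 20-sample mix.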
One aspect in which Capsule Networks have shown promise is their ability to perform classification tasks given limited datasets, in some cases outperforming other models in accuracy. This capability is applicable to maritime classification tasks, where there is a lack of the labeled data needed to train machine learning algorithms. For these reasons, Capsule Networks lend themselves well to the maritime vessel BCCT dataset, which exhibits the characteristics in which Capsule Networks excel and which has proven difficult to train on, even when augmented with synthetic data. Comparing such networks against more traditional architectures and data augmentation techniques provides a potential roadmap for incorporating them into future classification tasks involving imagery in data-starved domains. In this paper, we present our results on the classification of maritime vessels using a Capsule Network and explore its usefulness at this task given the current state of Capsule Network development.
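A distinguishing piece of the Capsule Network architecture is its vector-valued activation: each capsule outputs a vector whose length encodes the probability that an entity is present, produced by the "squash" nonlinearity of Sabour et al. A minimal NumPy sketch of that nonlinearity (function name and `eps` stabilizer are our own) is:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule squashing nonlinearity from Sabour et al. (2017):

        v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||)

    Shrinks short vectors toward zero and scales long vectors to a
    length just under 1, preserving direction. eps guards against
    division by zero for the all-zero vector.
    """
    sq_norm = np.sum(s * s, axis=axis, keepdims=True)
    norm = np.sqrt(sq_norm + eps)
    return (sq_norm / (1.0 + sq_norm)) * (s / norm)
```

For instance, a capsule output of `[3.0, 4.0]` (length 5) is squashed to a vector of length 25/26 ≈ 0.96 pointing in the same direction, which the network can read as a high presence probability.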