Journal of Electronic Imaging, Vol. 28, Issue 05, 053030, (October 2019) https://doi.org/10.1117/1.JEI.28.5.053030
Joint distribution matching based on generative adversarial nets (GANs) is an effective way to alleviate the insufficient diversity of labeled samples in semisupervised learning. Beyond the existing labeled samples, generated samples and their predicted labels should also be taken into account, both to further increase the diversity of labeled samples and to improve the controllability of generation; however, current works have not considered this. We therefore propose a semisupervised learning model with adversarial training among joint distributions. The model consists of a generator, a classifier, and three discriminators operating on four joint distributions of samples and labels. Theoretical analysis shows that, when the model reaches equilibrium, the classifier is exactly the inference network of the generator, so the controllability of the generator and the generalization ability of the classifier improve each other. In semisupervised classification experiments, our model achieved state-of-the-art error rates of 0.59%, 16.45%, and 4.86% on the MNIST, CIFAR10, and SVHN datasets, respectively. When only 20 labels are available on MNIST, the error rate drops from the previous best of 4% to 1.09%, indicating that the model is highly robust to the number of labels. The model is also competitive in semisupervised generation.
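To make the idea of "four joint distributions of samples and labels" concrete, the following is a minimal NumPy sketch, not the paper's implementation. The networks are stand-in linear maps, and the specific pairing of joints to the three discriminators is a hypothetical illustration (the abstract does not specify it): each joint is simply a (sample, label) pair represented as one concatenated vector that a discriminator could score.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM_X, DIM_Z, N_CLASSES, BATCH = 8, 4, 10, 16

# Toy stand-ins for the paper's networks (hypothetical linear maps,
# not the actual architectures, which the abstract does not describe).
W_g = rng.normal(size=(DIM_Z + N_CLASSES, DIM_X))
W_c = rng.normal(size=(DIM_X, N_CLASSES))

def one_hot(y):
    return np.eye(N_CLASSES)[y]

def generator(z, y):
    # G: (noise, label) -> sample
    return np.tanh(np.concatenate([z, one_hot(y)], axis=1) @ W_g)

def classifier(x):
    # C: sample -> predicted label distribution (softmax)
    logits = x @ W_c
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def joint(x, y_probs):
    # A (sample, label) pair represented as one concatenated vector.
    return np.concatenate([x, y_probs], axis=1)

# Toy data in place of real images and labels.
x_l = rng.normal(size=(BATCH, DIM_X))            # labeled samples
y_l = rng.integers(0, N_CLASSES, size=BATCH)     # their labels
x_u = rng.normal(size=(BATCH, DIM_X))            # unlabeled samples
z = rng.normal(size=(BATCH, DIM_Z))              # generator noise
y_g = rng.integers(0, N_CLASSES, size=BATCH)     # labels fed to G
x_g = generator(z, y_g)

# Four joint distributions over (sample, label):
p_real = joint(x_l, one_hot(y_l))           # labeled data + true label
p_class = joint(x_u, classifier(x_u))       # unlabeled data + C's prediction
p_gen = joint(x_g, one_hot(y_g))            # generated data + conditioning label
p_gen_class = joint(x_g, classifier(x_g))   # generated data + C's prediction

# Hypothetical pairing: each of the three discriminators is trained to
# distinguish one of the model joints from the real labeled joint, pushing
# all four distributions to match at equilibrium.
for fake in (p_class, p_gen, p_gen_class):
    assert fake.shape == p_real.shape == (BATCH, DIM_X + N_CLASSES)
```

When all four joints coincide, C(x) agrees with the label that conditioned G, which is the sense in which the classifier becomes the generator's inference network.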