16 October 2020 Effective background removal method based on generative adversary networks

Removing cluttered backgrounds from hand gesture images is challenging. The popular approach, image semantic segmentation, still cannot handle fine-grained background removal well when training samples are insufficient. We are the first to propose a background removal method based on a conditional generative adversarial network (CGAN). With CGAN, our method translates images with backgrounds into images without backgrounds. Instead of the traditional, complex image-to-semantics pipeline, the proposed method performs a direct image-to-image task: the generator produces a background-free image, and a discriminator judges whether backgrounds remain in the output. By iteratively training the generator and discriminator, two goals are fulfilled: (i) improving the discriminator's ability to recognize whether generated images contain backgrounds and (ii) enhancing the generator's ability to remove backgrounds. For our study, a large number of gesture images were collected and simulated for the experiments. The results demonstrate that the proposed method achieves remarkable background-removal performance on different gesture images; the training is robust, and the simple network generalizes across hand gestures.
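The alternating generator/discriminator scheme described in the abstract can be illustrated with a minimal sketch. This is not the authors' network: it is a toy numpy example in which "images" are 4-vectors whose last two entries are background clutter, the generator is a learned per-pixel mask, and the discriminator is a logistic classifier that estimates whether an image is background-free. All names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample_batch(n=32):
    """Toy paired data: input with background, target without."""
    fg = rng.normal(1.0, 0.1, size=(n, 2))              # foreground (hand) signal
    bg = rng.normal(2.0, 0.5, size=(n, 2))              # background clutter
    x = np.concatenate([fg, bg], axis=1)                # image with background
    y = np.concatenate([fg, np.zeros((n, 2))], axis=1)  # background removed
    return x, y

def G(x, w):
    # Generator: per-pixel sigmoid mask; should learn to keep foreground only.
    return x * sigmoid(w)

def D(imgs, v, b):
    # Discriminator: P(image is background-free).
    return sigmoid(imgs @ v + b)

w = np.zeros(4)                 # generator parameters
v = rng.normal(0, 0.1, 4)       # discriminator weights
b = 0.0
lr = 0.1

for step in range(500):
    x, y = sample_batch()
    fake = G(x, w)

    # Discriminator step: real = background-free targets, fake = G(x).
    # Gradient of -log D(y) - log(1 - D(fake)) w.r.t. (v, b).
    p_real, p_fake = D(y, v, b), D(fake, v, b)
    grad_v = -((1 - p_real)[:, None] * y).mean(0) + (p_fake[:, None] * fake).mean(0)
    grad_b = -(1 - p_real).mean() + p_fake.mean()
    v -= lr * grad_v
    b -= lr * grad_b

    # Generator step: fool the discriminator, i.e. minimise -log D(G(x)).
    p_fake = D(G(x, w), v, b)
    dmask = sigmoid(w) * (1 - sigmoid(w))               # derivative of the mask
    grad_w = -((1 - p_fake)[:, None] * x * dmask).mean(0) * v
    w -= lr * grad_w

mask = sigmoid(w)
```

After training, the learned mask suppresses the background entries while preserving the foreground ones, which is the one-dimensional analogue of the image-to-image translation the paper performs.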

© 2020 SPIE and IS&T. 1017-9909/2020/$28.00
Qingfei Wang, Shu Li, Changbo Wang, and Menghan Dai "Effective background removal method based on generative adversary networks," Journal of Electronic Imaging 29(5), 053014 (16 October 2020).
Received: 14 February 2020; Accepted: 5 October 2020; Published: 16 October 2020
