Inspired by the recent success of generative adversarial networks (GANs), we propose a multi-agent GAN, named 3DJoinGANs, for handling imbalanced training data in semantic segmentation. The proposed method comprises two conditional GANs with four agents: a pair of segmentors and a pair of discriminators. The framework learns a joint distribution of magnetic resonance (MR) and computed tomography (CT) images from different brain diseases by enforcing a weight-sharing constraint. The first segmentor is trained on 3D multi-modal MRI to learn semantic segmentation of brain tumors, while the first discriminator classifies whether the segmentor's predicted output is real or fake. The second segmentor takes 3D multi-modal CT images to learn segmentation of brain stroke lesions, and the second discriminator distinguishes the segmentor's output from ground-truth data annotated by an expert. We demonstrate that 3DJoinGANs is able to mitigate the imbalanced-data problem and improve segmentation results through oversampling and training on a joint distribution of cross-domain images. The proposed architecture shows promising performance on the ISLES-2018 benchmark for segmentation of 3D multi-modal ischemic stroke lesions and on the BraTS-2018 challenge for semantic segmentation of 3D multi-modal brain tumors.
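The weight-sharing constraint mentioned above can be illustrated with a minimal, stdlib-only Python sketch. All names, layer sizes, and the update rule here are illustrative assumptions, not details from the paper: the two segmentors simply reference the same early-layer parameter objects, so an update made while training on one modality is immediately visible to the other, which is what couples the two domains into a joint distribution.

```python
# Minimal sketch of a weight-sharing constraint between two segmentors.
# Layer sizes, names, and the toy update are illustrative assumptions.
import random

random.seed(0)

def make_layer(n_in, n_out):
    """Return a toy weight matrix as a nested list (no framework needed)."""
    return [[random.gauss(0.0, 0.1) for _ in range(n_in)] for _ in range(n_out)]

# Shared early layers: both segmentors hold references to the SAME objects,
# so training either segmentor updates the common representation.
shared_layers = [make_layer(4, 8), make_layer(8, 8)]

# Domain-specific heads: separate parameters per modality (MRI vs. CT).
mri_head = make_layer(8, 2)  # e.g. tumor vs. background
ct_head = make_layer(8, 2)   # e.g. stroke lesion vs. background

segmentor_mri = shared_layers + [mri_head]
segmentor_ct = shared_layers + [ct_head]

# A toy parameter update applied while training the MRI segmentor...
segmentor_mri[0][0][0] += 0.5

# ...is immediately reflected in the CT segmentor's shared layer,
# while the domain-specific heads remain independent objects.
print(segmentor_ct[0][0][0] == segmentor_mri[0][0][0])  # True
print(mri_head is ct_head)                              # False
```

In a real implementation the shared layers would be the early convolutional blocks of the two 3D segmentors, with only the output heads (and the two discriminators) kept domain-specific.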