Early detection contributes significantly to the curability of breast cancer, and mammographic imaging allows it to be achieved non-invasively. Supervised deep learning, currently the dominant computer-aided detection (CADe) approach, has played a major role in object detection in computer vision, but it suffers from a limiting requirement: the need for large amounts of labelled data. This constraint is even more severe for medical datasets, whose annotation is costly and time-consuming. Furthermore, medical datasets are usually imbalanced, a condition that often hinders classifier performance. The aim of this paper is to learn the distribution of the minority class in order to synthesise new samples and thereby improve lesion detection in mammography. Deep Convolutional Generative Adversarial Networks (DCGANs) can efficiently generate breast masses. They were trained on subsets of a mammographic dataset of increasing size and used to generate diverse and realistic breast masses. The effect of including the generated images and/or applying horizontal and vertical flipping was tested in a setting where an imbalanced dataset (ratio of 1:10) of mass and normal-tissue patches was classified using a fully-convolutional network. A maximum improvement of 0.09 in F1 score was obtained by using DCGANs together with flipping augmentation, compared with using the original images alone. We show that DCGANs can synthesise photo-realistic breast mass patches with considerable diversity. We demonstrate that, in this setting, appending synthetic images in combination with flipping outperforms the traditional augmentation method of flipping alone, offering faster improvement as a function of training set size.
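As a minimal illustration of the evaluation metric used above, the following sketch computes the F1 score from confusion-matrix counts for a two-class mass/normal classifier under the paper's 1:10 imbalance. All counts here are hypothetical, chosen only to show how an F1 improvement is measured; they are not the paper's results.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall, computed from
    true positives (tp), false positives (fp) and false negatives (fn)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical test set with a 1:10 imbalance: 100 mass patches, 1000 normals.
# Baseline classifier (original images only): 70 masses found, 40 false alarms.
baseline = f1_score(tp=70, fp=40, fn=30)
# Classifier trained with augmented data: 80 masses found, 25 false alarms.
augmented = f1_score(tp=80, fp=25, fn=20)

print(round(baseline, 3))              # baseline F1
print(round(augmented - baseline, 3))  # F1 improvement from augmentation
```

F1 is preferred over raw accuracy here because, with a 1:10 imbalance, a classifier that labels everything "normal" already reaches about 91% accuracy while detecting no masses at all.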