Applying deep learning (DL) models to medical image processing has yielded impressive performance. Generative adversarial networks (GANs) are DL models proposed to mitigate the challenge of insufficient image data by synthesizing data similar to real sample images. However, we observed that benchmark digital mammography datasets with regions of interest (ROIs) for characterizing breast cancer abnormalities contain only limited samples with architectural distortion; images with this abnormality are sparsely distributed across all publicly available datasets. This paper proposes a GAN model, named ArchGAN, aimed at synthesizing ROI-based samples with architectural distortion abnormality for training convolutional neural networks (CNNs). Our approach involves designing a GAN, consisting of a generator and a discriminator, to learn a hierarchy of representations in digital mammograms. The proposed model was applied to the Mammographic Image Analysis Society (MIAS) and Digital Database for Screening Mammography (DDSM) datasets. The quality of the generated images was evaluated using PSNR, SSIM, FSIM, BRISQUE, PIQE, NIQE, and FID. The results show that ArchGAN performed very well. The outcome of this study is a model for augmenting CNN training with ROI-centric images exhibiting architectural distortion abnormalities in digital mammography.
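To make the evaluation criteria concrete, the following is a minimal sketch of two of the full-reference metrics named above, PSNR and a simplified SSIM, implemented in NumPy. This is illustrative only, not the paper's evaluation code; the function names are our own, and the SSIM shown uses a single global window, whereas the standard SSIM averages the same statistic over local sliding windows.

```python
import numpy as np

def psnr(ref, gen, max_val=255.0):
    """Peak signal-to-noise ratio between a reference and a generated image."""
    mse = np.mean((ref.astype(np.float64) - gen.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def global_ssim(ref, gen, max_val=255.0):
    """Simplified SSIM computed over the whole image as one window.

    The standard SSIM (Wang et al.) averages this statistic over local
    sliding windows; this global variant is only a rough approximation.
    """
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    x, y = ref.astype(np.float64), gen.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For a generated mammogram patch that perfectly matches its reference, PSNR is infinite and SSIM equals 1; lower values indicate greater distortion. The no-reference metrics (BRISQUE, PIQE, NIQE) and FID require learned or corpus-level statistics and are typically taken from existing library implementations.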