Unwanted artifacts in dermoscopic images cause diverse problems in skin lesion analysis and are an obstacle to practical applications. Researchers have demonstrated that eliminating artifacts such as hairs can improve the performance of diagnostic systems; in turn, this facilitates the identification of abnormalities and increases patient survival rates. In recent years, attention mechanisms have shown strong performance in various research fields, but their application to hair removal has not been fully explored. In this paper, we propose an effective generative adversarial network, called EA-GAN, designed specifically for hair removal from dermoscopic images. In the generator network, we introduce a modified Attention U-Net architecture that combines convolutional layers with attention gate modules to focus on target features while suppressing irrelevant information. Furthermore, vision transformer encoder blocks are used in the discriminator to encode long-range image dependencies. We created a new dataset to train and test our model by selecting images from various publicly available datasets, such as ISIC Challenge 2020, HAM10000, and PH2. To assess the effectiveness of our model, we adopt diverse evaluation metrics, namely peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), multiscale structural similarity index (MS-SSIM), and learned perceptual image patch similarity (LPIPS), on which EA-GAN achieves 40.97, 0.950, 0.985, and 0.030, respectively. Experimental results show that the proposed method achieves competitive performance compared with state-of-the-art methods. The model can remove hairs of different thicknesses and densities, as well as ruler marks, while maintaining overall image quality. Our code is publicly available at https://github.com/ghadahall/EAGAN.
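For reference, PSNR, the first metric quoted above, is defined as 10 log10(MAX^2 / MSE) in decibels, where MSE is the mean squared error between the reference image and the restored output. The sketch below is a minimal pure-Python illustration of that formula; the image sizes and pixel values are illustrative placeholders, not data from the paper.

```python
import math

def psnr(reference, restored, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between two equally sized images
    given as flat sequences of pixel intensities; higher is better."""
    mse = sum((r - s) ** 2 for r, s in zip(reference, restored)) / len(reference)
    if mse == 0:
        return math.inf  # identical images: no error, infinite PSNR
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy check: a uniform error of 0.01 on a [0, 1]-scaled image gives
# MSE = 1e-4 and hence 40.0 dB, the same order of magnitude as the
# 40.97 dB reported for EA-GAN above.
ref = [0.0] * 64
out = [0.01] * 64
print(round(psnr(ref, out), 2))  # → 40.0
```

Higher PSNR indicates that the hair-removed output deviates less from the clean reference; SSIM, MS-SSIM, and LPIPS complement it by measuring structural and perceptual similarity rather than raw pixel error.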