In semantic segmentation, current deep convolutional neural networks rely heavily on large amounts of labeled data to achieve superior segmentation results. However, these deep models generalize poorly across datasets from different domains. To alleviate this performance degradation, unsupervised domain adaptation attempts to transfer knowledge from a labeled source domain to an unlabeled target domain. Most previous unsupervised domain adaptation methods use adversarial training or self-training to minimize the distribution discrepancy between the source and target domains, often ignoring inter-class discriminative learning, contextual structural integrity, and the class distribution information of the pseudo-labeled data in the target domain. To correctly align semantic information between cross-domain data, we propose unsupervised domain adaptation via class center-based contrastive learning (C3L) and complementary Region-Class Mixing (RCM) data augmentation. First, we introduce class center-based contrastive learning to enhance inter-class discriminative learning: by establishing class centers in the feature space and encouraging each pixel to move closer to its own class center while moving away from the others, pixels of the same category acquire highly similar representations, and the inter-class discriminative capability of the domain adaptation method is significantly improved. Second, for self-training, we exploit the complementarity between target-domain samples in both their confident regions and their class distributions: we construct a region-class complementarity matrix and recombine two complementary target-domain images into new samples with complete contextual structure and rich class distribution information. Our goal is to improve the performance of the semantic segmentation model on the target domain. On two classic unsupervised domain adaptation benchmarks for semantic segmentation, the proposed method demonstrates significant performance improvements over baseline methods and remains competitive with state-of-the-art methods.
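The abstract does not give the exact form of the two components, so the following is a minimal illustrative sketch, not the paper's implementation. It assumes per-pixel embeddings and integer labels, and implements a class center-based contrastive objective in the spirit of C3L: class centers are estimated as mean embeddings, and an InfoNCE-style cross-entropy pulls each pixel toward its own center and pushes it away from the others. All function and variable names are hypothetical.

```python
import torch
import torch.nn.functional as F

def class_center_contrastive_loss(features, labels, num_classes, temperature=0.1):
    """Sketch of a class center-based contrastive loss (assumed form).

    features: (N, C) per-pixel embeddings
    labels:   (N,)   integer class labels (source labels or target pseudo-labels)
    """
    features = F.normalize(features, dim=1)

    # Estimate one center per class as the mean embedding of its pixels.
    centers = torch.zeros(num_classes, features.size(1), device=features.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            centers[c] = F.normalize(features[mask].mean(dim=0), dim=0)

    # InfoNCE-style objective: each pixel is attracted to its own class
    # center (positive) and repelled from all other centers (negatives).
    logits = features @ centers.t() / temperature  # (N, num_classes)
    return F.cross_entropy(logits, labels)
```

For the RCM augmentation, one plausible reading, under the assumption that "complementary" classes are those present in one target image but missing from the other, is a ClassMix-style paste of those class regions; again, this is a hedged sketch rather than the authors' exact procedure.

```python
def region_class_mix(img_a, lbl_a, img_b, lbl_b, classes_to_paste):
    """Hypothetical mixing step: paste the pixels of the selected classes
    from image B onto image A, along with their pseudo-labels.

    img_*: (3, H, W) images; lbl_*: (H, W) pseudo-label maps
    classes_to_paste: classes in B that complement A's class distribution
    """
    mask = torch.zeros_like(lbl_b, dtype=torch.bool)
    for c in classes_to_paste:
        mask |= lbl_b == c
    mixed_img = torch.where(mask.unsqueeze(0), img_b, img_a)
    mixed_lbl = torch.where(mask, lbl_b, lbl_a)
    return mixed_img, mixed_lbl
```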