Diabetic retinopathy (DR) is an eye disease caused by diabetes that leads to impaired vision and even blindness. DR segmentation technology can assist ophthalmologists with early diagnosis, helping to prevent the progression of the disease. However, DR segmentation is a challenging task because lesions vary greatly in scale, exhibit high inter-class similarity, have complex structures and blurred edges, and differ in brightness and contrast across lesion types. Most existing methods do not adequately extract the semantic information carried in the channels of lesion features, which is critical for effectively distinguishing lesion edges. In this paper, we propose a dual-branch channel attention enhancement feature fusion network that integrates a CNN and a Transformer for DR segmentation. First, we introduce a Channel Crossing Attention Module (CCAM) into the UNet framework to eliminate semantic inconsistencies between the encoder and decoder, allowing better integration of contextual information. Moreover, we leverage the Transformer's strong global modeling capability to capture long-range dependencies and further enrich the contextual information. Finally, we build a Dual-branch Channel Attention Enhancement Fusion Module (DCAE) to enhance the channel-wise semantic information of both branches, which improves the discriminability of blurred lesion edges. Compared with state-of-the-art methods, our method improves mAUPR, mDice, and mIoU by 1.36%, 1.85%, and 2.20% on the IDRiD dataset, and by 4.62%, 0.20%, and 2.60% on the DDR dataset, respectively. The experimental results show that the multi-scale semantic features of the two branches are effectively fused, enabling accurate lesion segmentation.
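To make the channel-attention idea concrete, the following is a minimal NumPy sketch of a generic channel attention mechanism of the squeeze-and-excitation flavor: pool each channel to a descriptor, gate it through a small bottleneck, and rescale the channels. This is only an illustration of the general technique; the function name, the weight shapes `w1`/`w2`, and the reduction ratio `r` are hypothetical and do not correspond to the actual CCAM or DCAE implementations, whose details are not given in this abstract.

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Generic SE-style channel attention sketch (not the paper's module).

    x  : feature map of shape (C, H, W)
    w1 : hypothetical learned weights of shape (C // r, C)
    w2 : hypothetical learned weights of shape (C, C // r)
    """
    # Squeeze: global average pool over spatial dims -> per-channel descriptor
    s = x.mean(axis=(1, 2))                  # shape (C,)
    # Excitation: bottleneck MLP, ReLU then sigmoid gating
    h = np.maximum(w1 @ s, 0.0)              # shape (C // r,)
    g = 1.0 / (1.0 + np.exp(-(w2 @ h)))      # gate in (0, 1), shape (C,)
    # Rescale each channel by its gate, emphasizing informative channels
    return x * g[:, None, None]

# Toy usage with random features and random (untrained) weights
rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
y = channel_attention(x, w1, w2)
```

Because the gate is a sigmoid, each output channel is a scaled copy of the input channel with a factor in (0, 1); in a dual-branch setting, such gates can reweight the channels of each branch before the fused features are decoded.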