Automated segmentation of retinal blood vessels in fundus images provides ophthalmologists with critical insights for the non-invasive diagnosis of common eye diseases. Early and precise detection of these conditions is essential for preserving vision, making accurate vessel segmentation a key step in identifying sight-threatening vascular diseases. However, segmenting blood vessels in fundus images is challenging due to factors such as large variability in vessel scale and appearance, occlusions, complex backgrounds, variations in image quality, and the intricate branching patterns of the retinal vasculature. To address these challenges, the Unified Gated Swin Transformer with Multi-Feature Full Fusion (UGS-M3F) model has been developed as a deep learning framework tailored for retinal vessel segmentation. UGS-M3F combines a Unified Multi-Context Feature Fusion (UM2F) module and a Gated Boundary-Aware Swin Transformer (GBS-T) module to capture contextual information at multiple levels: UM2F enhances the extraction of fine vessel detail, while GBS-T emphasizes small-vessel detection and ensures broad coverage of large vessels. Extensive experiments on the publicly available FIVES, DRIVE, STARE, and CHASE_DB1 datasets show that UGS-M3F significantly outperforms existing state-of-the-art methods, improving the Dice Coefficient (DC) over the best-performing baseline by 2.12% on FIVES, 1.94% on DRIVE, 2.52% on STARE, and 2.14% on CHASE_DB1. This improvement in segmentation accuracy can enable more precise disease identification and management across a range of ocular conditions.
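To make the described composition concrete, the following is a minimal, hypothetical PyTorch sketch of how a multi-context fusion module and a gated boundary-aware refinement stage could be wired together ahead of a segmentation head. The class names mirror the abstract's terminology, but all internals are placeholders: in particular, the GBS-T stand-in here is a simple convolutional gate rather than actual Swin Transformer blocks, so this illustrates the fusion-then-gated-refinement pattern, not the authors' implementation.

```python
import torch
import torch.nn as nn

class UM2F(nn.Module):
    """Placeholder for the Unified Multi-Context Feature Fusion module:
    fuses fine (high-resolution) and coarse (wider-context) features."""
    def __init__(self, channels: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(channels * 2, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, fine: torch.Tensor, coarse: torch.Tensor) -> torch.Tensor:
        # Upsample the coarse context to the fine resolution, then fuse.
        coarse = nn.functional.interpolate(
            coarse, size=fine.shape[-2:], mode="bilinear", align_corners=False
        )
        return self.fuse(torch.cat([fine, coarse], dim=1))

class GBST(nn.Module):
    """Placeholder for the Gated Boundary-Aware Swin Transformer stage:
    a learned sigmoid gate reweights refined features so that thin-vessel
    and boundary responses are emphasized (no real Swin blocks here)."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.refine = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual gated refinement: keep x, add gated refined features.
        return x + self.gate(x) * self.refine(x)

class UGSM3FSketch(nn.Module):
    """Toy end-to-end composition: encoder features -> UM2F fusion ->
    gated refinement -> per-pixel vessel logits."""
    def __init__(self, in_ch: int = 3, channels: int = 32):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, channels, kernel_size=3, padding=1)
        self.down = nn.Conv2d(channels, channels, kernel_size=3, stride=2, padding=1)
        self.um2f = UM2F(channels)
        self.gbst = GBST(channels)
        self.head = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fine = self.stem(x)       # full-resolution features
        coarse = self.down(fine)  # strided conv gives wider-context features
        fused = self.um2f(fine, coarse)
        return self.head(self.gbst(fused))  # one-channel vessel logits

# Smoke test on a dummy fundus-sized crop.
logits = UGSM3FSketch()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 1, 64, 64])
```

The residual gating in the GBST stand-in reflects one common way to bias a network toward fine structures: the sigmoid gate learns per-pixel weights, so small-vessel responses can be amplified without suppressing the large-vessel signal carried by the skip path.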