Existing image fusion methods focus on aggregating image features from different modalities into a clear and comprehensive image. However, these solutions ignore the importance of gradient features, which leads to smoothed contrast information in the fused images. In this paper, an edge gradient enhancement method for infrared and visible image fusion, named EgeFusion, is proposed. First, the source images are decomposed into a series of base and detail layers through a simple weighted least squares filter. Next, a sub-window variance filter is proposed for the fusion of the detail layers. For the base layers, a fusion strategy is designed that combines visual saliency mapping with adaptive weight assignment. This strategy distributes the saliency features of the source images globally, thus providing more comprehensive and valuable information about the regions of interest in the fused images. Finally, the fused base and detail layers are recombined to obtain the fusion result. The experimental results show that the proposed method significantly enhances the gradient features of the source images, making it easier for the human visual system to focus on the regions of interest. Compared with other state-of-the-art fusion methods, the proposed EgeFusion achieves superior visual quality and competitive results in infrared and visible, multi-focus, and multi-modal medical image fusion. More importantly, our approach also improves performance on downstream object detection.
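The pipeline summarized above (two-scale decomposition, variance-driven detail fusion, saliency-weighted base fusion, reconstruction) can be sketched in simplified form. This is a minimal illustration, not the paper's implementation: a box (mean) filter stands in for the weighted least squares filter, a plain local-variance comparison stands in for the sub-window variance filter, and the saliency map is approximated by each pixel's deviation from the global mean; all function names here are hypothetical.

```python
import numpy as np

def box_filter(img, r):
    """Mean filter over (2r+1)x(2r+1) windows via integral images.

    A simplified stand-in for the weighted least squares filter used
    in the paper's base/detail decomposition.
    """
    pad = np.pad(img, r, mode="edge")
    k = 2 * r + 1
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/col so window sums index cleanly
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def ege_fusion_sketch(ir, vis, r=3):
    """Toy two-scale fusion of an infrared and a visible image (floats in [0, 1])."""
    # 1) Two-scale decomposition: base = smoothed image, detail = residual.
    base_ir, base_vis = box_filter(ir, r), box_filter(vis, r)
    det_ir, det_vis = ir - base_ir, vis - base_vis

    # 2) Detail fusion: keep the pixel from the layer with larger local
    #    variance (a crude proxy for the sub-window variance filter).
    var_ir = box_filter(det_ir ** 2, r) - box_filter(det_ir, r) ** 2
    var_vis = box_filter(det_vis ** 2, r) - box_filter(det_vis, r) ** 2
    det_f = np.where(var_ir >= var_vis, det_ir, det_vis)

    # 3) Base fusion: saliency-style adaptive weights from each pixel's
    #    deviation from the image's global mean intensity.
    sal_ir = np.abs(ir - ir.mean())
    sal_vis = np.abs(vis - vis.mean())
    w = sal_ir / (sal_ir + sal_vis + 1e-12)
    base_f = w * base_ir + (1.0 - w) * base_vis

    # 4) Reconstruction: recombine the fused base and detail layers.
    return base_f + det_f
```

When both inputs are identical the sketch returns the input unchanged, which is a quick sanity check that the decomposition and reconstruction steps are consistent.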