Visible and infrared image fusion aims to generate fused images with comprehensive scene understanding and detailed contextual information. However, existing methods often struggle to adequately model the relationships between modalities and to optimize for downstream applications. To address these challenges, we propose a novel scene-semantic decomposition-based approach for visible and infrared image fusion, termed SSDFusion. Our method employs a multi-level encoder-fusion network whose fusion modules implement the proposed scene-semantic decomposition and fusion strategy: they extract and fuse scene-related and semantic-related components, respectively, and inject the fused semantics into the scene features, enriching the contextual information in fused features while sustaining the fidelity of fused images. Moreover, we incorporate meta-feature embedding to connect the encoder-fusion network with the downstream application network during training, enhancing our method's ability to extract semantics, optimize the fusion effect, and serve tasks such as semantic segmentation. Extensive experiments demonstrate that SSDFusion achieves state-of-the-art image fusion performance while improving results on semantic segmentation tasks. Our approach bridges the gap between feature decomposition-based image fusion and high-level vision applications, providing a more effective paradigm for multi-modal image fusion. The code is available at https://github.com/YiXian-Xiao/SSDFusion.
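To make the described fusion-module design concrete, the following is a minimal PyTorch-style sketch of a scene-semantic decomposition and fusion block. It only illustrates the data flow stated in the abstract (decompose each modality's features into scene-related and semantic-related components, fuse each across modalities, and inject the fused semantics into the scene features); the module name, 1x1-convolution heads, and max/mean fusion rules are illustrative assumptions, not the layers or fusion rules used in SSDFusion.

```python
import torch
import torch.nn as nn

class SceneSemanticFusionBlock(nn.Module):
    """Illustrative fusion module sketch (hypothetical, not the SSDFusion implementation):
    decomposes each modality's features into scene-related and semantic-related components,
    fuses them across modalities, and injects the fused semantics into the scene features."""

    def __init__(self, channels: int):
        super().__init__()
        # Hypothetical decomposition heads splitting features into scene / semantic parts.
        self.scene_head = nn.Conv2d(channels, channels, kernel_size=1)
        self.semantic_head = nn.Conv2d(channels, channels, kernel_size=1)
        # Hypothetical injection layer merging fused semantics into fused scene features.
        self.inject = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat_vis: torch.Tensor, feat_ir: torch.Tensor) -> torch.Tensor:
        # Decompose each modality's encoder features.
        scene_vis, sem_vis = self.scene_head(feat_vis), self.semantic_head(feat_vis)
        scene_ir, sem_ir = self.scene_head(feat_ir), self.semantic_head(feat_ir)
        # Fuse scene components (keep the strongest response) and semantic components (average).
        scene_fused = torch.maximum(scene_vis, scene_ir)
        sem_fused = 0.5 * (sem_vis + sem_ir)
        # Inject the fused semantics into the fused scene features.
        return self.inject(torch.cat([scene_fused, sem_fused], dim=1))


# Usage sketch: one such block per level of a multi-level encoder-fusion network.
block = SceneSemanticFusionBlock(channels=64)
fused = block(torch.randn(1, 64, 120, 160), torch.randn(1, 64, 120, 160))
```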