Fake news detection has received increasing attention from researchers in recent years, especially multimodal fake news detection involving both text and images. However, many previous studies simply feed the semantic features of the text and image modalities into a binary classifier after basic concatenation or attention mechanisms, even though these features often contain a significant amount of inherent noise, which in turn leads to both intra- and inter-modal uncertainty. In addition, while methods based on simple concatenation of the two modalities have achieved notable results, they apply fixed weights across modalities, causing high-impact features to be overlooked. To address these issues, we propose a novel semantic-level multimodal dynamic fusion framework for fake news detection (MDF-FND). To the best of our knowledge, this is the first attempt to develop a dynamic fusion framework for semantic-level multimodal fake news detection. Specifically, our model consists of two main components: (1) an Uncertainty Estimation Module (UEM), which uses a multi-head attention mechanism to model intra-modal uncertainty, and (2) a Dynamic Fusion Network (DFN) based on Dempster-Shafer evidence theory, which dynamically integrates the weights of the text and image modalities. To further enhance the dynamic fusion framework, a graph attention network is employed for inter-modal uncertainty modeling before the DFN. Extensive experiments demonstrate the effectiveness of our model on three datasets, with a performance improvement of up to 4% on the Twitter dataset, achieving state-of-the-art performance. We also conduct a systematic ablation study to validate our motivation and architectural design. Our model is publicly available at https://github.com/CoisiniStar/MDF-FND.
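
To illustrate the kind of evidence-theoretic fusion the DFN performs, the following is a minimal sketch of Dempster's rule of combination applied to two modality-specific evidence sources, assuming a subjective-logic parameterization of per-class beliefs and residual uncertainty. The helper names `ds_combine` and `evidence_to_mass`, and the softplus-based evidence mapping, are illustrative assumptions, not taken from the released code:

```python
import torch

def ds_combine(b1, u1, b2, u2):
    """Fuse two evidence sources with Dempster's rule of combination.

    b1, b2: per-class belief masses, shape (batch, C); u1, u2: residual
    uncertainty masses, shape (batch, 1). For each source the masses
    satisfy b.sum(-1, keepdim=True) + u == 1.
    """
    # Conflict K: mass the two sources assign to incompatible classes.
    agree = (b1 * b2).sum(-1, keepdim=True)
    conflict = b1.sum(-1, keepdim=True) * b2.sum(-1, keepdim=True) - agree
    norm = 1.0 - conflict
    # Fused beliefs: joint agreement plus each source's belief weighted by
    # the other source's uncertainty; fused uncertainty shrinks multiplicatively.
    b = (b1 * b2 + b1 * u2 + u1 * b2) / norm
    u = (u1 * u2) / norm
    return b, u

def evidence_to_mass(logits):
    """Map classifier logits to belief masses and an uncertainty mass."""
    e = torch.nn.functional.softplus(logits)   # non-negative evidence
    alpha = e + 1.0                            # Dirichlet parameters
    s = alpha.sum(-1, keepdim=True)            # Dirichlet strength
    return e / s, logits.shape[-1] / s         # beliefs, uncertainty

# Hypothetical usage: fuse the fake/real scores of the two branches.
text_logits = torch.randn(4, 2)   # scores from the text branch
img_logits = torch.randn(4, 2)    # scores from the image branch
b_t, u_t = evidence_to_mass(text_logits)
b_i, u_i = evidence_to_mass(img_logits)
b, u = ds_combine(b_t, u_t, b_i, u_i)
pred = b.argmax(-1)               # fused fake/real decision
```

Under this scheme, a modality with high uncertainty contributes little to the fused beliefs, so the effective per-modality weights adapt per sample rather than being fixed.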