Recently, there has been growing interest in Semantic Communication (SemCom) frameworks that aim to enhance intelligent communications by exploiting the intended meaning of transmitted information. In this context, some researchers have introduced federated learning (FL) to train semantic models effectively and efficiently while keeping private data on the respective devices. However, publicly sharing model updates during co-training in SemCom can lead to privacy leakage. To investigate this issue, this paper mounts membership inference attacks (MIA) against FL-based SemCom co-training processes. Through experiments, we observe instances of privacy leakage, with the leakage rate varying as the models converge during training. Based on these findings, we propose the Adaptive Privacy Budget-based Differential Privacy (APB-DP) method for secure and effective semantic model training. APB-DP uses differential privacy (DP) to defend against MIA by injecting artificial noise during training, and it dynamically adapts the privacy budget (i.e., the level of noise) as the models converge, so that the privacy protection remains effective throughout the training process. In addition, APB-DP takes the impact of wireless channels into account to avoid adding unnecessary noise. Simulation results show that APB-DP reduces the privacy leakage rate by 13% compared to plain FL-based SemCom, and reduces the performance loss rate by 71% compared to NbAFL, a state-of-the-art DP-based model training scheme.
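To make the mechanism concrete, the following is a minimal Python/NumPy sketch of the kind of adaptive-budget Gaussian perturbation the abstract describes, applied to a client's model update before upload. The convergence proxy (per-round loss change), the noise schedule, and the crediting of wireless channel noise toward the target noise level are illustrative assumptions, not the exact APB-DP formulation.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Clip the local model update to bound its sensitivity (DP-SGD-style clipping)."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / max(norm, 1e-12))

def adaptive_sigma(loss_delta, base_sigma=1.0, min_sigma=0.1):
    """Hypothetical adaptation rule: use the per-round change in training loss as a
    convergence proxy and raise the noise level as the model converges (when MIA risk
    from overfitting is typically higher). APB-DP's exact rule is not reproduced here."""
    convergence = np.exp(-abs(loss_delta))            # close to 1 once the loss stops moving
    return max(min_sigma, base_sigma * convergence)

def privatize_update(update, clip_norm, sigma, channel_noise_sigma=0.0):
    """Gaussian mechanism on a clipped update. Noise already contributed by the wireless
    channel (channel_noise_sigma, an assumed estimate) is credited toward the target
    noise level so that no unnecessary artificial noise is injected."""
    clipped = clip_update(update, clip_norm)
    artificial_sigma = np.sqrt(max(sigma**2 - channel_noise_sigma**2, 0.0))
    noise = np.random.normal(0.0, artificial_sigma * clip_norm, size=clipped.shape)
    return clipped + noise

# Usage: each client perturbs its update locally before sending it for aggregation.
local_update = np.random.randn(1000)                   # placeholder local model update
sigma_t = adaptive_sigma(loss_delta=0.02)              # adapt the budget to convergence
private_update = privatize_update(local_update, clip_norm=1.0,
                                  sigma=sigma_t, channel_noise_sigma=0.05)
```

The design choice illustrated here is that the artificial noise is computed per round from two inputs the abstract highlights: how far training has converged and how much noise the wireless channel already contributes.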