The massive spread of rumors on social networks has caused serious harm to individuals and society, making rumor detection increasingly urgent. Existing deep-learning-based detection methods have achieved strong results owing to their powerful semantic representation capabilities. However, their centralized training mode and reliance on large volumes of training data containing user privacy pose significant risks of privacy abuse or leakage. Although federated learning with client-level differential privacy offers a potential solution, it leads to a dramatic decline in model performance. To address these issues, we propose a Federated Graph Knowledge Distillation framework (FedGKD), which aims to identify rumors effectively while preserving user privacy. In this framework, we anonymize graphs along both the feature and structure dimensions, and apply differential privacy only to sensitive features so that the overall data statistics do not deviate significantly. Additionally, to improve model generalization in the federated setting, we train a lightweight generator at the server to extract global knowledge through knowledge distillation; this knowledge is then broadcast to clients as inductive experience to regularize their local training. Extensive experiments on four publicly available datasets demonstrate that FedGKD outperforms strong baselines and provides strong privacy protection.