The growing penetration of distributed energy resources and the widespread application of power electronic devices in distribution networks make accurate network modeling difficult and cause frequent topology changes. Voltage control based on multi-agent deep reinforcement learning offers fast, model-free solutions, but traditional multi-agent deep reinforcement learning methods are ill-suited to topologies that change in real time. To enhance the robustness of agent policies after topology changes in the distribution network and to improve policy performance in the early stages of training, a voltage control strategy based on incremental learning and knowledge fusion is proposed. First, incremental learning is combined with multi-agent deep reinforcement learning so that agents retain memory of old topologies while training on new ones, preventing catastrophic forgetting. Second, a knowledge fusion-based policy shielding layer is designed to restrict policy adjustments during training, ensuring sound optimization under different topologies and improving training efficiency. Finally, the proposed control strategy is verified on a modified IEEE 33-bus distribution system; the results show that the agent policies perform well under a variety of typical topologies and that training efficiency is improved.
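The abstract does not spell out the incremental-learning mechanism, but a common rehearsal-based approach to preventing catastrophic forgetting is to keep transitions from previously seen topologies in the replay buffer and mix them into each training batch. The sketch below illustrates that idea only; the class name `TopologyReplayBuffer` and parameters such as `old_sample_ratio` are illustrative assumptions, not the paper's implementation.

```python
import random
from collections import defaultdict

class TopologyReplayBuffer:
    """Rehearsal-style buffer (illustrative sketch, not the paper's method):
    retains a quota of transitions from each previously seen topology so that
    updates on the new topology also replay old experience, which mitigates
    catastrophic forgetting."""

    def __init__(self, capacity_per_topology=10_000, old_sample_ratio=0.3):
        self.buffers = defaultdict(list)          # topology_id -> list of transitions
        self.capacity = capacity_per_topology
        self.old_sample_ratio = old_sample_ratio  # share of each batch drawn from old topologies

    def add(self, topology_id, transition):
        buf = self.buffers[topology_id]
        if len(buf) >= self.capacity:
            buf.pop(random.randrange(len(buf)))   # random eviction keeps the buffer diverse
        buf.append(transition)

    def sample(self, current_topology_id, batch_size):
        old_ids = [t for t in self.buffers if t != current_topology_id]
        n_old = int(batch_size * self.old_sample_ratio) if old_ids else 0
        n_new = batch_size - n_old
        current = self.buffers[current_topology_id]
        batch = random.sample(current, min(n_new, len(current)))
        for t in old_ids:                         # spread the "old" quota across past topologies
            k = max(1, n_old // len(old_ids))
            batch += random.sample(self.buffers[t], min(k, len(self.buffers[t])))
        return batch
```

With `old_sample_ratio = 0.3`, roughly 30% of every minibatch comes from earlier topologies, so the agents' networks keep receiving gradient signal from old operating conditions while adapting to the new one.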
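Likewise, the abstract only states that the shielding layer "restricts policy adjustments during training." One plausible reading is an output-side shield that fuses reference actions from policies trained on previous topologies into a prior and clips the new policy's action to a bounded deviation from it. The function below is a minimal sketch under that assumption; the weighted-average fusion rule and the names `shield_action` and `max_deviation` are hypothetical.

```python
import numpy as np

def shield_action(new_action, reference_actions, max_deviation=0.05, weights=None):
    """Illustrative policy shield (an assumption, not the paper's exact
    knowledge-fusion scheme): fuse reference actions from previously trained
    policies into a prior, then clip the new policy's action so it deviates
    from that prior by at most max_deviation per dimension."""
    refs = np.asarray(reference_actions, dtype=float)  # shape: (n_policies, action_dim)
    w = np.full(len(refs), 1.0 / len(refs)) if weights is None else np.asarray(weights, dtype=float)
    fused = w @ refs                                   # knowledge-fused prior action
    lower, upper = fused - max_deviation, fused + max_deviation
    return np.clip(np.asarray(new_action, dtype=float), lower, upper)

# Example: two reference policies suggest mild reactive-power setpoints, so an
# aggressive early-training action [0.9, -0.2] is pulled back near their fusion.
print(shield_action([0.9, -0.2], [[0.5, 0.0], [0.6, 0.1]]))  # -> [0.6, -0.0] (approx.)
```

A shield of this kind bounds how far an immature policy can stray from knowledge distilled from earlier topologies, which is consistent with the claimed improvement in early-stage training performance.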