To quickly suppress the rapid fluctuations of reactive power and voltage caused by the random output of distributed energy resources, machine learning (ML) methods, represented by deep reinforcement learning (DRL) and imitation learning (IL), have recently been applied to volt-var control (VVC) research to replace traditional methods that require a large number of iterations. Although the ML methods in the existing literature can achieve rapid online VVC optimization, shortcomings such as slow offline training and insufficient generality still hinder their practical application. First, this paper proposes a single-agent simplified DRL (SASDRL) method suitable for the centralized control of transmission networks. Building on the classic "Actor-Critic" architecture and on the fact that whether the Actor network can generate good control strategies depends heavily on whether the Critic network can evaluate them accurately, this method simplifies and improves the offline training process of DRL-based VVC; its core ideas are the simplification of Critic network training and a change in the update mode of the Actor and Critic networks. It reduces the sequential decision problem posed in traditional DRL-based VVC to a single-point decision problem, and the output of the Critic network is changed from the original sequential action value to the reward value corresponding to the current control strategy. In addition, by training the Critic network in advance to accelerate the convergence of the Actor network, the method avoids the computational waste caused by the agent's random search in the early training stage, which greatly improves the offline training speed while retaining the advantages of DRL, such as strong generality and no need for massive labeled data. Second, a multi-agent simplified DRL (MASDRL) method suitable for decentralized, zero-communication control of active distribution networks is proposed.
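The two-stage idea described above can be sketched on a toy problem. The following is a minimal numpy illustration, not code from the paper: the environment, the quadratic reward, the linear Actor policy, and all variable names are invented assumptions chosen so the sketch stays self-contained. Stage 1 trains the Critic in advance as a supervised reward predictor r̂(s, a); Stage 2 trains the Actor by gradient ascent on the frozen Critic's predicted reward, a single-point decision rather than a sequential one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-point decision problem (assumed for illustration):
# state s in [0, 1]; the unknown optimal action is a* = 0.8 * s,
# and the reward of one control strategy is r(s, a) = -(a - 0.8*s)^2.
def true_reward(s, a):
    return -(a - 0.8 * s) ** 2

# --- Stage 1: train the Critic in advance (supervised reward regression) ---
# Critic model: linear in quadratic features of (s, a), fit by least squares.
def phi(s, a):
    return np.array([1.0, s, a, s * a, s * s, a * a])

S = rng.uniform(0.0, 1.0, 2000)          # sampled states
A = rng.uniform(-1.0, 1.0, 2000)         # randomly explored actions
X = np.stack([phi(s, a) for s, a in zip(S, A)])
y = true_reward(S, A)                    # "expert sample" rewards
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# --- Stage 2: train the Actor against the frozen, pretrained Critic ---
# Actor policy: a = theta * s; ascend the Critic's predicted reward.
theta, lr = 0.0, 0.5
for _ in range(200):
    s = rng.uniform(0.0, 1.0)
    a = theta * s
    # d r_hat / d a from the Critic's features, chained through da/dtheta = s
    dr_da = w[2] + w[3] * s + 2.0 * w[5] * a
    theta += lr * dr_da * s

print(round(theta, 2))                   # converges to the optimal gain 0.8
```

Because the Critic is already an accurate reward model before the Actor takes a single gradient step, none of the Actor's updates are wasted on evaluating random early-stage exploration, which is the source of the speedup the abstract reports.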
This method generalizes the core ideas of SASDRL to a multi-agent setting and, likewise on the basis of training a unified Critic network in advance, accelerates the convergence of each agent's Actor network. Each agent corresponds to a different VVC device in the system. During online application, each agent independently generates its control strategy through its own Actor network, using only the local information of the node to which its VVC device is connected. In addition, IL is adopted for initialization to inject the idea of global optimization into each agent in advance, improving the local collaborative control among the various VVC devices. Simulation results on the modified IEEE 118-bus system show that SASDRL and MASDRL both achieve the best VVC results among all the compared methods. In terms of offline training speed, SASDRL consumes the least training time: it is 4.47 times faster than traditional DRL and 50.76 times faster than IL. 87.1% of SASDRL's training time is spent generating the expert samples required for the supervised training of the Critic network, while only 12.9% is consumed by the training of the Actor and Critic networks. MASDRL reduces offline training time by 82.77% compared with traditional multi-agent DRL (MADRL). The following conclusions can be drawn from the simulation analysis: (1) Compared with traditional mathematical methods and existing ML methods, SASDRL obtains control results similar to those of mathematical methods while greatly accelerating the offline training of DRL-based VVC. (2) Compared with traditional MADRL, by inheriting the core ideas of SASDRL and introducing IL into the initialization of the Actor network, the proposed MASDRL+IL method significantly improves both the local collaborative control among the various VVC devices and the offline training speed. © 2024 China Machine Press. All rights reserved.