In this study, a novel two-step optimization model is developed to maximize internal power trading in a distribution network comprising several networked microgrids. In the first step, a soft actor-critic-based optimization model is developed to help the retailer agent determine dynamic internal trading prices for its local microgrid network. Better internal prices encourage microgrids to increase internal power trading, which in turn increases the retailer's profit. Unlike deep Q-learning-based methods, the proposed method is able to handle large state and action spaces. In addition, entropy-regularized reinforcement learning accelerates and stabilizes the learning process and helps prevent the policy from becoming trapped in local optima. In the second step, an optimization model is developed to facilitate internal trading among the networked microgrids using a cooperative strategy. Since the policy network plays the role of a function approximator, the learning model can handle uncertainties in the distribution network. Finally, simulation results demonstrate the superiority of the proposed model over direct power trading schemes.
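The role of entropy regularization mentioned above can be illustrated with a minimal discrete sketch: under a soft (entropy-regularized) objective, the optimal policy is a softmax over Q-values with temperature α, so a larger α yields a higher-entropy, more exploratory policy. The Q-values and the three candidate price levels below are hypothetical placeholders, not taken from the paper.

```python
import numpy as np

def soft_policy(q_values, alpha):
    # Entropy-regularized (soft) optimal policy: pi(a) ∝ exp(Q(a) / alpha).
    z = q_values / alpha
    z -= z.max()          # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def entropy(p):
    # Shannon entropy of a discrete distribution (nats).
    return -np.sum(p * np.log(p + 1e-12))

# Hypothetical Q-values for three candidate internal trading price levels.
q = np.array([1.0, 0.5, 0.2])

exploit = soft_policy(q, alpha=0.05)  # low temperature: near-greedy policy
explore = soft_policy(q, alpha=1.0)   # high temperature: high-entropy policy

print(entropy(exploit) < entropy(explore))  # higher alpha -> more exploration
```

The extra entropy term keeps the pricing policy stochastic early in training, which is what the abstract credits with stabilizing learning and reducing the risk of convergence to a poor local optimum.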