In mobile edge computing (MEC), offloading computation tasks from edge clients to edge nodes can relieve the burden on edge clients, which is especially important for delay-sensitive tasks that must be completed before their deadlines. However, when an edge node receives a large number of tasks, the waiting time of the tasks may become excessive, and some tasks may even be dropped due to timeouts. To address these problems, we model task offloading as a long-term optimization problem based on a Markov decision process (MDP). We introduce task queuing models on both edge clients and edge nodes to optimize the distributed task offloading scheme, together with a workload prediction model for edge nodes that enables dynamic task scheduling and prevents the edge nodes from being overloaded. We propose a distributed dynamic task offloading algorithm based on deep reinforcement learning, in which a recurrent neural network with gated recurrent units (GRUs), combined with the Dueling-DQN and Double-DQN (DDQN) techniques, enables each client to make its own offloading decisions without knowledge of the other clients. To improve training efficiency and the stability of the learned policy, a queue selection algorithm is proposed to reduce the action space. Experimental results show that, compared with several existing algorithms, the proposed algorithm can effectively predict the workload of edge nodes and make reasonable offloading decisions, significantly reducing the average energy consumption and delay of edge clients as well as the ratio of dropped tasks.
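To make the described architecture concrete, the sketch below shows one plausible shape for a GRU-based recurrent Q-network with a dueling head, of the kind the abstract names. It is not the paper's implementation; all layer sizes, names, and the state/action dimensions are assumptions for illustration only.

```python
# A minimal sketch (not the paper's implementation) of a GRU-based recurrent
# Q-network with a dueling head. All names and dimensions are assumptions.
import torch
import torch.nn as nn

class DuelingRecurrentQNet(nn.Module):
    def __init__(self, state_dim: int, action_dim: int, hidden_dim: int = 64):
        super().__init__()
        # The GRU summarizes the recent history of local observations
        # (e.g., local queue lengths and predicted edge-node workload).
        self.gru = nn.GRU(state_dim, hidden_dim, batch_first=True)
        # Dueling heads: a scalar state value V(s) and per-action advantages A(s, a).
        self.value_head = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                                        nn.Linear(hidden_dim, 1))
        self.adv_head = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                                      nn.Linear(hidden_dim, action_dim))

    def forward(self, obs_seq, h0=None):
        # obs_seq: (batch, seq_len, state_dim) sequence of local observations.
        out, h = self.gru(obs_seq, h0)
        feat = out[:, -1, :]                      # hidden state at the last time step
        value = self.value_head(feat)             # (batch, 1)
        adv = self.adv_head(feat)                 # (batch, action_dim)
        # Standard dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
        q = value + adv - adv.mean(dim=1, keepdim=True)
        return q, h
```

In a Double-DQN setup, the online copy of such a network would select the greedy action while a separate target copy evaluates it when forming the TD target; the queue selection step mentioned above would shrink the effective action set before the network is queried.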