Designing communication channels among agents is an effective way to conduct decentralized learning, especially in partially observable environments or large-scale multiagent systems. In this work, we develop a communication model with dual-level recurrence to provide a more efficient communication mechanism for multiagent reinforcement learning. Communication is conducted by a gated-attention-based recurrent network, in which historical states are taken into account and treated as the second level of recurrence. We separate communication messages from memories in the recurrent model, so that the proposed communication flow can adapt to changing communication targets under limited communication and the communication results remain fair to every agent. We provide a thorough discussion of our method in both partially observable and fully observable environments. The results of several experiments suggest that our method outperforms existing decentralized communication frameworks as well as the corresponding centralized training method.