Most graph neural networks (GNNs) suffer from over-smoothing, which limits further performance gains. Hence, many studies decouple the GNN into two atomic operations, the propagation (P) operation and the transformation (T) operation, yielding a paradigm named decoupled graph neural networks (DGNNs) that alleviates this problem. Because manually designing DGNN architectures is time-consuming and expert-dependent, decoupled graph neural architecture search (DGNAS) methods have been proposed and have achieved notable success. However, existing DGNAS methods offer no explanation for why DGNN architectures should be built with an adaptive, variable P operation, which hinders researchers from further exploring DGNAS methods. In addition, the naive evolutionary search algorithm used by previous DGNAS methods places no constraints on the search direction, limiting its efficiency in exploring the DGNN architecture space. To address these challenges, we propose the decoupled graph neural architecture search with explainable variable propagation operation (DGNAS-EP) method. Specifically, we propose the mean distinguishability (MD) metric to measure how distinguishable node representations are, which explains why DGNAS methods should build DGNN architectures with a variable P operation: graphs with different distributions require different P operations to adaptively reach the optimal MD, which is crucial for improving DGNN performance. Furthermore, DGNAS-EP uses previously explored DGNN architectures as prior knowledge to constrain the search direction according to the evolutionary state, which effectively improves the search efficiency of the DGNAS method. Experiments on real-world graphs show that DGNAS-EP outperforms state-of-the-art baseline methods. Code is available at https://github.com/frankdoge/DGNAS-EP.git.
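To make the decoupled P/T paradigm concrete, the following is a minimal PyTorch sketch of a DGNN whose number of propagation steps k is a searchable architecture choice, plus an illustrative stand-in for a distinguishability-style metric. This is not the authors' implementation: the class and function names (DecoupledGNN, sym_norm_adj, mean_distinguishability) are ours, and the exact MD definition and the DGNAS-EP search procedure are given in the paper, not here.

```python
# Illustrative sketch of the decoupled DGNN paradigm (not the authors' code).
import torch
import torch.nn as nn

def sym_norm_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)

class DecoupledGNN(nn.Module):
    """P and T are separated; k (the number of P steps) is an architecture
    choice that a DGNAS method could search over."""
    def __init__(self, in_dim: int, hid_dim: int, out_dim: int, k: int):
        super().__init__()
        self.k = k  # variable P operation: propagation depth per architecture
        self.mlp = nn.Sequential(  # T operation: learned transformation
            nn.Linear(in_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, out_dim)
        )

    def forward(self, x: torch.Tensor, norm_adj: torch.Tensor) -> torch.Tensor:
        for _ in range(self.k):   # P operation: parameter-free smoothing
            x = norm_adj @ x
        return self.mlp(x)

def mean_distinguishability(h: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Hypothetical stand-in for an MD-style metric: mean pairwise distance
    between class-mean representations. The paper's actual MD definition
    may differ; this only illustrates the idea of measuring how
    distinguishable node representations remain after propagation."""
    classes = labels.unique()
    means = torch.stack([h[labels == c].mean(dim=0) for c in classes])
    return torch.cdist(means, means).mean()

# Example usage on a toy graph (assumed data, for illustration):
adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
x = torch.randn(3, 8)
model = DecoupledGNN(in_dim=8, hid_dim=16, out_dim=2, k=2)
logits = model(x, sym_norm_adj(adj))
```

Under this sketch, over-smoothing corresponds to the metric shrinking as k grows too large, which is why different graphs call for different propagation depths.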