Interactive recommendation (IR) continuously optimizes performance through sustained interaction between users and the system, and can therefore capture dynamic changes in user interests more effectively. Because deep reinforcement learning (DRL) excels at dynamic optimization and sequential decision-making, researchers have integrated DRL models into interactive recommendation. However, interactive recommendation still suffers from data sparsity, and DRL-based recommendation algorithms are often inefficient when handling large-scale discrete action spaces. To address these problems, this paper proposes GraphDRL, a GNN-based deep reinforcement learning model. Specifically, we use Graph Neural Networks (GNNs) to obtain embedding representations that effectively model the intricate interactions between users and items, alleviating the data sparsity problem. Building on these embeddings, we construct a deep reinforcement learning model with a temporal multi-head attention mechanism to capture users' evolving preferences. Moreover, we propose a dynamic candidate action generation method based on item popularity and embedding representations, which not only identifies items of interest to users more accurately but also reduces the action space, thereby improving both recommendation accuracy and efficiency. Experiments on three public benchmark recommendation datasets and a real-world buyer-supplier interaction dataset confirm the superior performance of our algorithm.
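The candidate action generation step could, for instance, work along the following lines. This is a minimal sketch assuming a learned user state vector and an item embedding matrix; the weighted scoring rule, the `alpha` trade-off parameter, and the function name are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def candidate_actions(user_state, item_embeddings, popularity, k=50, alpha=0.5):
    """Score every item by a weighted mix of embedding similarity and
    popularity, then keep only the top-k items as the reduced action space.

    user_state      : (d,) current user state embedding
    item_embeddings : (n_items, d) item embedding matrix
    popularity      : (n_items,) interaction counts per item
    alpha           : trade-off between similarity and popularity (assumed)
    """
    # Cosine similarity between the user state and every item embedding.
    sim = item_embeddings @ user_state
    sim = sim / (np.linalg.norm(item_embeddings, axis=1)
                 * np.linalg.norm(user_state) + 1e-8)

    # Log-scaled, normalized popularity so frequently interacted items are
    # favored without letting raw counts dominate the score.
    pop = np.log1p(popularity)
    pop = pop / (pop.max() + 1e-8)

    scores = alpha * sim + (1 - alpha) * pop
    # Indices of the k highest-scoring items form the candidate action set,
    # so the DRL agent ranks k items instead of the full catalog.
    return np.argsort(scores)[::-1][:k]
```

Restricting the agent to such a candidate set is what shrinks the large discrete action space; the exact scoring function would be the one defined in the paper.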