Few-shot learning has attracted extensive research attention for its ability to classify unseen data from limited samples, potentially addressing the data scarcity common to many machine learning tasks. This paper proposes a new transductive learning method that integrates information propagation with prototype rectification in few-shot learning and achieves state-of-the-art classification performance on four popular datasets. We use first-order information propagation instead of infinite-order methods to avoid the over-smoothing caused by repeated rounds of information aggregation and node updating in graph neural networks. We further reveal that current transductive few-shot learning models often assume class-balanced datasets, which cannot be guaranteed in practice. We therefore propose to estimate the distribution of task samples and use it to optimize the number of propagation iterations, enhancing the robustness of the model. Extensive experiments validate the proposed model and reveal a confirmation bias that the optimization strategy can effectively address. (C) 2022 Elsevier B.V. All rights reserved.
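The contrast between first-order and infinite-order propagation can be illustrated with a minimal sketch of label propagation on an episode graph. This is not the paper's implementation; the function names, the Gaussian affinity, and the hyperparameters (`sigma`, `alpha`) are illustrative assumptions. Running the update for a single step is the first-order scheme; iterating it to convergence yields the closed-form infinite-order solution (I - alpha*S)^{-1}Y, which is prone to over-smoothing.

```python
import numpy as np

def build_affinity(features, sigma=1.0):
    """Gaussian affinity over episode features (support + query),
    symmetrically normalized as S = D^{-1/2} W D^{-1/2}.
    (Illustrative choice; the paper's graph construction may differ.)"""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)                      # no self-loops
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(1) + 1e-12)  # D^{-1/2}
    return (W * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

def propagate(S, Y, alpha=0.5, steps=1):
    """Propagate label scores Y for `steps` rounds.
    steps=1 is first-order propagation; large `steps` approaches the
    infinite-order fixed point and tends to over-smooth."""
    F = Y.copy()
    for _ in range(steps):
        F = alpha * (S @ F) + (1.0 - alpha) * Y
    return F

# Toy 2-way episode: two labeled support points and two unlabeled queries.
feats = np.array([[0.0, 0.0], [5.0, 5.0], [0.1, 0.1], [4.9, 5.1]])
Y = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]])
F = propagate(build_affinity(feats), Y, steps=1)
print(F.argmax(1))  # predicted class per node
```

A single propagation step already assigns each query the label of its nearby support point, without the repeated aggregation rounds that drive node representations toward indistinguishable averages.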