Long-tailed methods have attracted increasing attention and achieved excellent performance owing to the long-tailed distribution of real-world graphs, in which many small-degree tail nodes have limited structural connectivity. However, real-world graphs are inevitably noisy or incomplete because of error-prone data acquisition or perturbations, which may violate the assumption of long-tailed methods that the raw graph structure is ideal. To address this issue, we study the impact of graph perturbations on the performance of long-tailed methods and propose a novel GNN-based framework, LTSL-GNN, for graph structure learning and tail node embedding enhancement. LTSL-GNN iteratively learns the graph structure and the tail node embedding enhancement parameters, allowing information-rich head nodes to optimize the graph structure through multi-metric learning and then further enhancing the embeddings of tail nodes with the learned structure. Experimental results on six real-world datasets demonstrate that LTSL-GNN outperforms other state-of-the-art baselines, especially when the graph structure is perturbed.
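To make the alternating scheme described above concrete, the following is a minimal, self-contained PyTorch sketch, not the paper's actual implementation: the two metrics (cosine and a Gaussian kernel), the top-k edge insertion toward head nodes, the mixing coefficient `alpha`, the degree threshold, and all function names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def multi_metric_similarity(x, metric_w):
    """Blend two candidate similarity metrics with learnable weights."""
    xn = F.normalize(x, dim=1)
    cos = xn @ xn.T                                    # cosine similarity
    dist = torch.cdist(x, x)                           # Euclidean distances
    gauss = torch.exp(-dist ** 2 / (2 * dist.mean() ** 2 + 1e-8))  # Gaussian kernel
    w = torch.softmax(metric_w, dim=0)                 # normalized metric weights
    return w[0] * cos + w[1] * gauss

def refine_structure(sim, adj, head_mask, k=5):
    """Keep existing edges and add top-k edges toward head nodes,
    weighted by the learned similarity."""
    cand = sim.masked_fill(~head_mask.unsqueeze(0), float("-inf"))  # head columns only
    cand.fill_diagonal_(float("-inf"))                 # no self-loops
    topk = torch.topk(cand, k, dim=1).indices
    rows = torch.arange(adj.size(0)).unsqueeze(1).expand_as(topk)
    new_adj = adj.clone()
    new_adj[rows, topk] = sim[rows, topk]              # gradients can reach metric_w
    return new_adj

def enhance_tail_embeddings(x, adj, tail_mask, alpha=0.5):
    """Mix tail-node features with neighborhood means over the learned graph."""
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1e-8)
    agg = (adj @ x) / deg                              # mean aggregation
    out = x.clone()
    out[tail_mask] = (1 - alpha) * x[tail_mask] + alpha * agg[tail_mask]
    return out

# Toy alternating loop on random data.
n, d, deg_thresh = 100, 16, 5
x = torch.randn(n, d)
adj = (torch.rand(n, n) < 0.05).float()
adj = ((adj + adj.T) > 0).float()                      # symmetrize
deg = adj.sum(dim=1)
head_mask, tail_mask = deg >= deg_thresh, deg < deg_thresh
metric_w = torch.zeros(2, requires_grad=True)          # would be trained end to end

for _ in range(3):                                     # alternate the two steps
    sim = multi_metric_similarity(x, metric_w)
    adj = refine_structure(sim, adj, head_mask)
    x = enhance_tail_embeddings(x, adj, tail_mask)
```

The point of the sketch is only the alternation itself: structure refinement anchored on head nodes feeds the next round of tail-node enhancement, and vice versa.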