Graph neural networks (GNNs) have achieved remarkable success in many fields. However, different model architectures can perform very differently across application scenarios, and designing an effective architecture requires a great deal of specialized knowledge, which limits the practical adoption of GNN models. In recent years, graph neural architecture search (GNAS) has therefore attracted widespread attention. GNAS selects a GNN architecture from a predefined search space using a search algorithm, with the search direction guided by the evaluations produced by an estimation strategy. Traditional GNAS methods suffer from long search times, difficulty in hyperparameter selection, and high sensitivity to data quality. In particular, when node feature information is missing, the candidate architectures explored during the search cannot access complete features, which significantly reduces the accuracy of GNAS. To tackle these challenges, we propose AutoPGO, a novel optimization framework for parallel graph neural architecture search. In AutoPGO, we impute missing features with a feature propagation algorithm derived by minimizing the Dirichlet energy, improve the search algorithm with a mutation decay strategy, and optimize hyperparameters with Bayesian optimization. Experimental results show that AutoPGO achieves strong performance and a degree of robustness to missing feature information.
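
The feature propagation step can be illustrated with a minimal sketch. Minimizing the Dirichlet energy subject to the observed features as boundary conditions yields an iterative scheme: diffuse features over the symmetrically normalized adjacency matrix, then clamp the known rows back to their observed values. The sketch below assumes a dense NumPy adjacency matrix and illustrative names (propagate_features, num_iters); the paper's exact algorithm and implementation may differ.

```python
import numpy as np

def propagate_features(adj, x, known_mask, num_iters=40):
    """Sketch of Dirichlet-energy-minimizing feature propagation.

    adj:        (n, n) symmetric adjacency matrix
    x:          (n, d) feature matrix; rows of unknown nodes may be arbitrary
    known_mask: (n,) boolean array, True where node features are observed
    """
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
    # Symmetrically normalized adjacency D^{-1/2} A D^{-1/2}
    a_hat = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    x_known = x[known_mask]
    out = x.copy()
    out[~known_mask] = 0.0  # initialize missing features at zero
    for _ in range(num_iters):
        out = a_hat @ out          # diffuse features along edges
        out[known_mask] = x_known  # clamp observed features (boundary condition)
    return out
```

Each iteration is a contraction on the unknown rows, so the result converges to the unique minimizer of the Dirichlet energy with the observed features held fixed.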
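The mutation decay strategy is not specified in detail here; a minimal sketch, assuming an evolutionary search over a discrete architecture encoding in which the per-gene mutation probability decays geometrically with the generation index (the schedule parameters p0, decay, and p_min are illustrative values, not the paper's):

```python
import random

def mutation_rate(gen, p0=0.5, decay=0.9, p_min=0.05):
    """Decaying mutation schedule: broad exploration early in the search,
    shrinking perturbations as the population converges."""
    return max(p_min, p0 * decay ** gen)

def mutate(arch, gen, choices):
    """Mutate each architecture gene independently with a decaying probability.

    arch:    list of discrete choices (e.g. aggregator, activation, hidden dim)
    choices: list of candidate values per gene, same length as arch
    """
    p = mutation_rate(gen)
    return [random.choice(opts) if random.random() < p else gene
            for gene, opts in zip(arch, choices)]
```

Under such a schedule, early generations explore the search space widely while later generations make increasingly local refinements around the best candidates found so far.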