Graph convolutional networks (GCNs) play a critical role in improving the performance of collaborative filtering. They capture user preferences on bipartite graphs by stacking multiple convolutional layers that aggregate neighbor information. However, this layer stacking often leads to long training times before convergence and, due to the problem of oversmoothing, yields indistinguishable representations with significant performance deterioration. Additionally, interaction noise is amplified as convolutional layers propagate messages. To address these issues, we propose a simple, plug-and-play Neighborhood Structure Embedding approach, named NSE, which utilizes first-order adjacency information to construct structural embeddings. By explicitly incorporating local topological statistics before message passing, the embeddings propagated through GCNs gain better awareness of the graph's topological structure. This leads to an improved optimization path and greater robustness against noise propagation. Experimental results demonstrate significant performance improvements when our proposed NSE is employed in graph collaborative filtering models. In particular, the NSE-enhanced LGCN achieves performance gains of 5.06% and 4.86% on the Yelp and Amazon-Books datasets, respectively, and the average training convergence speed improves by 204.8%. NSE-enhanced graph collaborative filtering also demonstrates excellent robustness against both noise and oversmoothing.
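The general idea of injecting first-order structural information before message passing can be illustrated with a minimal sketch. Note that this is a hypothetical illustration, not the paper's actual NSE construction: here the structural embedding is derived from node degrees (one first-order adjacency statistic) via an assumed projection matrix `W_s`, and propagation follows a generic LightGCN-style symmetric normalization.

```python
import numpy as np

# Hypothetical sketch: enrich ID embeddings with a structural signal
# derived from first-order adjacency (here, node degree) BEFORE
# message passing. The actual NSE construction may differ.

rng = np.random.default_rng(0)
n_users, n_items, dim = 4, 5, 8

# Bipartite interaction matrix R (users x items); 1 = observed interaction.
R = (rng.random((n_users, n_items)) < 0.4).astype(float)

# Full adjacency of the user-item bipartite graph: [[0, R], [R^T, 0]].
A = np.block([[np.zeros((n_users, n_users)), R],
              [R.T, np.zeros((n_items, n_items))]])

# First-order structural statistic: node degrees, projected into the
# embedding space by a (hypothetical) learnable matrix W_s.
deg = A.sum(axis=1, keepdims=True)      # (N, 1), N = n_users + n_items
W_s = rng.standard_normal((1, dim)) * 0.1
struct_emb = deg @ W_s                  # (N, dim) structural embedding

# ID embeddings, enriched with structural information before propagation.
id_emb = rng.standard_normal((n_users + n_items, dim)) * 0.1
E0 = id_emb + struct_emb

# One layer of symmetric-normalized propagation (LightGCN-style):
# E1 = D^{-1/2} A D^{-1/2} E0.
d = np.clip(deg.squeeze(), 1, None)     # avoid division by zero
A_hat = A / np.sqrt(d[:, None] * d[None, :])
E1 = A_hat @ E0
print(E1.shape)                         # (9, 8)
```

Because the structural term is added to the layer-0 embeddings, it is carried through every subsequent propagation layer without any change to the convolution itself, which is what makes the approach plug-and-play.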