In recent years, many Graph Neural Network (GNN) architectures have been developed for graph-level representation learning and have steadily improved graph classification accuracy. These works, however, mainly focus on the design of GNN building blocks to strengthen representation learning capability and pay less attention to the choice of a more appropriate loss function as the learning objective. In this paper, we aim to improve the representation learning of existing GNN frameworks by incorporating a more task-oriented objective. To this end, we propose LEELoss, an approach based on Laplacian eigenmaps that augments the commonly used cross-entropy loss. Drawing on the properties of Laplacian eigenmaps for classification problems, we show that adding a Laplacian-eigenmap term as a regularizer to the original loss can further improve performance on graph-level learning tasks. Finally, through extensive experiments on popular benchmark datasets, we demonstrate that the proposed approach consistently improves classification performance across various GNN frameworks.
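To make the idea concrete, the following is a minimal sketch (not the paper's actual implementation) of how a Laplacian-eigenmap regularizer could be combined with cross-entropy in PyTorch. The function name `lee_loss`, the weight `lam`, and the use of a label-agreement affinity to build the batch-level Laplacian are all illustrative assumptions; the paper's exact construction of the affinity graph and regularization weight may differ.

```python
import torch
import torch.nn.functional as F

def lee_loss(logits, labels, embeddings, lam=0.1):
    """Hypothetical sketch: cross-entropy plus a Laplacian-eigenmap regularizer.

    Builds a batch-level affinity matrix W from label agreement, forms the
    graph Laplacian L = D - W, and penalizes tr(Z^T L Z), which equals
    (1/2) * sum_ij W_ij ||z_i - z_j||^2, so that graph-level embeddings of
    same-class graphs are pulled closer together.
    """
    ce = F.cross_entropy(logits, labels)

    # Affinity: W_ij = 1 if graphs i and j in the batch share a label, else 0.
    W = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    W.fill_diagonal_(0.0)

    # Unnormalized graph Laplacian L = D - W.
    D = torch.diag(W.sum(dim=1))
    L = D - W

    # Laplacian-eigenmap term, averaged over the batch size.
    reg = torch.trace(embeddings.t() @ L @ embeddings) / max(embeddings.size(0), 1)
    return ce + lam * reg
```

Under these assumptions, `embeddings` are the pooled graph-level representations produced by any GNN backbone, so the regularizer can be dropped into an existing training loop without changing the architecture.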