Many real-world machine learning tasks, such as multi-object detection and product recommendation, involve multiple objectives that cannot be optimized directly through a single objective function. Fortunately, multi-objective learning can address this problem efficiently via vector-valued algorithms. Recently, researchers have found that the performance of multi-objective learning is impaired when the mixture weights are unknown, since a fixed algorithm can hardly select the optimal model in the hypothesis space. Agnostic multi-objective learning has therefore been proposed, which provides an effective approach for simultaneously optimizing multiple objectives with unknown mixture weights. In this way, a proper model can be selected, because agnostic multi-objective learning improves the worst case over the hypothesis space. However, the existing generalization error bounds for agnostic multi-objective learning cannot converge faster than $O(1/\sqrt{n})$, which limits their generalization guarantees. In this paper, we provide a sharper excess risk bound for agnostic multi-objective learning with a convergence rate of $O(1/n)$, which is much faster than existing results and matches the best theoretical results of centralized learning. Based on our theory, we then propose a novel algorithm to improve the generalization performance of agnostic multi-objective learning.
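To fix notation for the setting described above, the agnostic multi-objective problem with unknown mixture weights can be written as a minimax objective; the following is a minimal sketch under standard assumptions, where the symbols $h$, $\mathcal{H}$, $\lambda$, $\Delta_K$, and $\mathcal{L}_k$ are illustrative placeholders rather than notation taken from this paper.

```latex
% Hedged sketch of the agnostic multi-objective objective (notation assumed, not from the source):
% h ranges over the hypothesis class \mathcal{H}, \lambda over the simplex of mixture weights,
% and \mathcal{L}_k denotes the expected risk of the k-th objective.
\begin{equation*}
  \min_{h \in \mathcal{H}} \; \max_{\lambda \in \Delta_K} \; \sum_{k=1}^{K} \lambda_k \, \mathcal{L}_k(h),
  \qquad
  \Delta_K = \Big\{ \lambda \in \mathbb{R}_{\ge 0}^{K} : \textstyle\sum_{k=1}^{K} \lambda_k = 1 \Big\}.
\end{equation*}
```

Under this reading, "improving the worst case" means controlling the excess risk of the learned model uniformly over all mixture weights $\lambda \in \Delta_K$, which is the quantity the $O(1/\sqrt{n})$ and $O(1/n)$ rates above refer to.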