Nearly all existing scene graph generation (SGG) models have overlooked the annotation quality of mainstream SGG datasets; i.e., they assume that: 1) all manually annotated positive samples are equally correct; and 2) all un-annotated negative samples are absolutely background. In this article, we argue that neither assumption holds for SGG: there are numerous "noisy" ground-truth predicate labels that break these two assumptions and harm the training of unbiased SGG models. To this end, we propose NICEST, a novel NoIsy label CorrEction and Sample Training strategy for SGG, which resolves these noisy-label issues by generating high-quality samples and designing an effective training strategy. Specifically, NICEST consists of two parts: 1) NICE, which detects noisy samples and reassigns higher-quality soft predicate labels to them. NICE comprises three main steps: negative Noisy Sample Detection (Neg-NSD), positive NSD (Pos-NSD), and Noisy Sample Correction (NSC). First, Neg-NSD is formulated as an out-of-distribution (OOD) detection problem, and pseudo labels are assigned to all detected noisy negative samples. Then, Pos-NSD uses a density-based clustering algorithm to detect noisy positive samples. Lastly, NSC uses weighted K-nearest neighbors (KNN) to reassign more robust soft predicate labels, rather than hard labels, to all noisy positive samples. 2) NIST, a multi-teacher knowledge-distillation-based training strategy that enables the model to learn unbiased fused knowledge. NIST also employs a dynamic trade-off weighting strategy to penalize the biases of different teachers. Thanks to the model-agnostic nature of both NICE and NIST, NICEST can be seamlessly incorporated into any SGG architecture to boost its performance on different predicate categories. In addition, to better evaluate the generalization ability of SGG models, we propose a new benchmark, VG-OOD, by reorganizing the prevalent VG dataset.
This reorganization deliberately makes the predicate distributions of the training and test sets as different as possible for each subject-object category pair, which helps disentangle the influence of subject-object category bias. Extensive ablations and results on different backbones and tasks attest to the effectiveness and generalization ability of each component of NICEST.
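To make the NSC step above concrete, the following is a minimal sketch of weighted-KNN soft-label reassignment. The function name, the choice of Euclidean distance over raw feature vectors, and the inverse-distance weighting are all illustrative assumptions; the paper's actual features and metric may differ.

```python
import numpy as np

def weighted_knn_soft_label(query, clean_feats, clean_labels, num_classes, k=3, eps=1e-8):
    """Sketch of NSC: replace a noisy positive sample's hard predicate label
    with a soft label, i.e., the distance-weighted average of the one-hot
    labels of its k nearest 'clean' neighbors (hypothetical implementation)."""
    dists = np.linalg.norm(clean_feats - query, axis=1)   # Euclidean distance (assumed metric)
    nn = np.argsort(dists)[:k]                            # indices of the k nearest neighbors
    weights = 1.0 / (dists[nn] + eps)                     # closer neighbors get larger weight
    weights /= weights.sum()                              # normalize weights to sum to 1
    soft = np.zeros(num_classes)
    for w, idx in zip(weights, nn):
        soft[clean_labels[idx]] += w                      # accumulate weighted one-hot votes
    return soft                                           # soft predicate label distribution
```

The returned distribution sums to one and keeps mass on every predicate class seen among the neighbors, which is what makes the corrected label "softer" and more robust than a single hard reassignment.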
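The multi-teacher fusion with dynamic trade-off weighting in NIST can likewise be sketched as below. Here `penalties` stands in for some measured bias score per teacher (e.g., how skewed a teacher is toward head predicates), and the exponential down-weighting is an assumed form, not the paper's exact rule.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a logit vector
    e = np.exp(x - x.max())
    return e / e.sum()

def nist_fused_target(teacher_logits, penalties):
    """Sketch of NIST: fuse several teachers' predicate distributions into one
    soft target, with trade-off weights that shrink as a teacher's bias
    penalty grows (hypothetical weighting scheme)."""
    probs = np.stack([softmax(l) for l in teacher_logits])  # each teacher's distribution
    w = np.exp(-np.asarray(penalties, dtype=float))         # higher penalty -> lower weight
    w /= w.sum()                                            # normalize teacher weights
    return (w[:, None] * probs).sum(axis=0)                 # fused soft target for distillation
```

A student trained against such a fused target (e.g., via a KL-divergence loss) receives knowledge from all teachers while the biased ones are dynamically penalized, matching the intent described above.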