MRIQC is a quality control tool that predicts the binary rating (accept/exclude) that human experts would assign to T1-weighted MR images of the human brain. For such prediction, a random forests classifier operates on a vector of image quality metrics (IQMs) extracted from each image. Although MRIQC achieved an out-of-sample accuracy of ~76%, we concluded that this performance on new, unseen datasets would likely improve after addressing two problems. First, we found that IQMs show "site-effects", since they are highly correlated with the acquisition center and imaging parameters. Second, the high inter-rater variability suggests the presence of annotation errors in the labels of both training and test datasets. Annotation errors may be accentuated by some preprocessing decisions. Here, we confirm the "site-effects" in our IQMs using t-distributed Stochastic Neighbour Embedding (t-SNE). We also obtain a ~10% accuracy improvement on the out-of-sample prediction of MRIQC by revising a label binarization step in MRIQC. Reliable and automated QC of MRI is in high demand for the increasingly large samples currently being acquired. We show here one iteration to improve the performance of MRIQC on this task, by investigating two challenging problems: site-effects and noise in the labels assigned by human experts.
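The two analysis components named above can be sketched as follows. This is a minimal illustration with synthetic data, not the authors' pipeline: a random forest predicting accept/exclude labels from IQM vectors, and a t-SNE embedding of the same IQMs in which site-dependent offsets (simulated here) make samples cluster by acquisition site.

```python
# Hypothetical sketch: synthetic IQM vectors with simulated "site-effects",
# a t-SNE embedding to inspect them, and a random forest quality classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.manifold import TSNE
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic IQM vectors: 3 "sites", each adding a site-specific offset
# to the metrics, mimicking the acquisition-dependent batch effect.
n_per_site, n_iqms = 60, 8
site_offsets = rng.normal(0, 3.0, size=(3, n_iqms))
X = np.vstack([rng.normal(0, 1, (n_per_site, n_iqms)) + off
               for off in site_offsets])
sites = np.repeat([0, 1, 2], n_per_site)
# Binary quality labels (accept=1) loosely tied to the first IQM.
y = (X[:, 0] + rng.normal(0, 0.5, len(X)) > X[:, 0].mean()).astype(int)

# t-SNE projection of the IQMs; with strong site offsets, points group
# by site rather than by quality label, revealing the "site-effect".
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

# Random forest on the IQM vectors, analogous to MRIQC's classifier stage.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
print(f"t-SNE embedding shape: {emb.shape}")
```

Coloring the embedding by `sites` versus by `y` is one simple way to check whether IQMs separate acquisition centers more strongly than quality classes.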