For random design nonparametric regression, the performance of the kernel estimator is investigated when the responses are binary and subject to misclassification. The kernel estimator is generally biased for the local proportion. To adjust for this bias, the double sampling scheme of Tenenbein (1970, 1971) is considered. A plugged-in kernel estimator and an imputed kernel estimator, both of which adjust for the effect of misclassification on the kernel estimator, are proposed, and their asymptotic mean squared errors are analysed. The plugged-in kernel estimator is better, in the sense of smaller asymptotic mean squared error, than the simple kernel estimator that uses only the error-free responses in the validation subsample; the imputed kernel estimator, however, has smaller asymptotic variance than the plugged-in estimator. If the misclassification probabilities are constant, the two proposed estimators have the same asymptotic bias, and the imputed kernel estimator is then always better than the plugged-in kernel estimator. For general misclassification probabilities, the asymptotic biases of the two proposed estimators are not comparable in magnitude. Nevertheless, our simulation results demonstrate that, even when the misclassification probabilities are not constant, the imputed kernel estimator remains better for reasonable sample sizes.
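To illustrate the bias described above, the following sketch simulates binary responses observed through constant, known misclassification probabilities and applies a Nadaraya-Watson kernel estimator together with a plug-in correction. This is a hypothetical, simplified illustration, not the paper's exact estimators: it assumes the misclassification probabilities `theta0` and `theta1` are constant and known, whereas the paper estimates them from a Tenenbein-type validation subsample. With surrogate responses Y*, one has E[Y*|x] = theta0 + p(x)(1 - theta0 - theta1), which motivates the correction.

```python
import numpy as np

rng = np.random.default_rng(0)

def nw_estimate(x0, X, Y, h):
    """Nadaraya-Watson kernel estimate of E[Y | X = x0], Gaussian kernel."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)
    return np.sum(w * Y) / np.sum(w)

# True local proportion p(x) = P(Y = 1 | X = x)  (illustrative choice).
p = lambda x: 0.2 + 0.6 * x

# theta0 = P(Y* = 1 | Y = 0), theta1 = P(Y* = 0 | Y = 1); assumed constant
# and known here (in the paper they would be estimated via double sampling).
n, h = 5000, 0.1
theta0, theta1 = 0.10, 0.15

X = rng.uniform(0.0, 1.0, n)
Y = (rng.uniform(size=n) < p(X)).astype(float)       # true binary responses
flip = rng.uniform(size=n)
Ystar = np.where(Y == 1.0,                           # misclassified surrogate
                 (flip >= theta1).astype(float),
                 (flip < theta0).astype(float))

x0 = 0.5
# Naive kernel estimate targets theta0 + p(x0)(1 - theta0 - theta1), not p(x0).
naive = nw_estimate(x0, X, Ystar, h)
# Plug-in correction under constant, known misclassification probabilities.
corrected = (naive - theta0) / (1.0 - theta0 - theta1)

print(f"true p(x0) = {p(x0):.3f}, naive = {naive:.3f}, corrected = {corrected:.3f}")
```

At `x0 = 0.5` the naive estimate centres on 0.10 + 0.5(0.75) = 0.475 rather than the true value 0.5, while the corrected estimate recovers the local proportion; the correction inflates the variance by a factor (1 - theta0 - theta1)^{-2}, which is the kind of bias-variance accounting the asymptotic mean squared error analysis makes precise.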