For the large-scale linear discrete ill-posed problem $\min\|Ax - b\|$ or $Ax = b$ with $b$ contaminated by Gaussian white noise, the most commonly used solvers are the Lanczos-bidiagonalization-based Krylov method LSQR and its mathematical equivalent CGLS, the Conjugate Gradient (CG) method implicitly applied to $A^TAx = A^Tb$; CGME, the CG method applied to $\min\|AA^Ty - b\|$ or $AA^Ty = b$ with $x = A^Ty$, and LSMR, which is equivalent to the minimal residual (MINRES) method applied to $A^TAx = A^Tb$, have also been used. These methods exhibit the typical semi-convergence behavior, and the iteration number $k$ plays the role of the regularization parameter. However, there has been no definitive answer to the long-standing fundamental question: can LSQR and CGLS find 2-norm filtering best possible regularized solutions? The same question applies to CGME and LSMR. At iteration $k$, LSQR, CGME and LSMR compute different iterates from the same $k$-dimensional Krylov subspace. A first and fundamental step towards answering this question is to accurately estimate how well the underlying $k$-dimensional Krylov subspace approximates the $k$-dimensional dominant right singular subspace of $A$. Assuming that the singular values of $A$ are simple, we present a general $\sin\Theta$ theorem for the 2-norm distances between these two subspaces and derive accurate estimates of them for severely, moderately and mildly ill-posed problems. We also establish some relationships between the smallest Ritz values and these distances. Numerical experiments confirm the sharpness of our results.
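As a minimal illustration of the two quantities at play (not the paper's code or experiments), the NumPy sketch below builds a synthetic severely ill-posed problem, runs a plain CGLS iteration to exhibit semi-convergence in $k$, and measures the 2-norm distance between the Krylov subspace $\mathcal{K}_k(A^TA, A^Tb)$ and the $k$-dimensional dominant right singular subspace of $A$. The problem size, singular-value decay rate, noise level, and the helper names `cgls` and `krylov_onb` are all assumptions for this sketch.

```python
# Sketch only: a synthetic severely ill-posed problem (sigma_j = 0.8^j),
# CGLS semi-convergence, and the 2-norm distance between the Krylov
# subspace K_k(A^T A, A^T b) and span of the first k right singular vectors.
import numpy as np

rng = np.random.default_rng(0)
m = n = 200

# A = U diag(sigma) V^T with geometrically decaying singular values.
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = 0.8 ** np.arange(n)
A = U @ (sigma[:, None] * V.T)

# Exact solution with decaying SVD coefficients (discrete Picard-like),
# plus Gaussian white noise at a relative level of 1e-4 (an assumption).
x_true = V @ (sigma * rng.standard_normal(n))
b_exact = A @ x_true
e = rng.standard_normal(m)
b = b_exact + 1e-4 * np.linalg.norm(b_exact) * e / np.linalg.norm(e)

def cgls(A, b, kmax):
    """Plain CGLS (CG on A^T A x = A^T b); returns iterates x_1, ..., x_kmax."""
    x = np.zeros(A.shape[1]); r = b.copy()
    s = A.T @ r; p = s.copy(); gamma = s @ s
    iterates = []
    for _ in range(kmax):
        q = A @ p
        alpha = gamma / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
        iterates.append(x.copy())
    return iterates

def krylov_onb(A, b, kmax):
    """Orthonormal basis of K_k(A^T A, A^T b), with full reorthogonalization."""
    Q = np.zeros((A.shape[1], kmax))
    w = A.T @ b
    Q[:, 0] = w / np.linalg.norm(w)
    for j in range(1, kmax):
        w = A.T @ (A @ Q[:, j - 1])
        for _ in range(2):                      # reorthogonalize twice
            w -= Q[:, :j] @ (Q[:, :j].T @ w)
        Q[:, j] = w / np.linalg.norm(w)
    return Q

kmax = 30
iterates = cgls(A, b, kmax)
errors = [np.linalg.norm(x - x_true) / np.linalg.norm(x_true) for x in iterates]
Q = krylov_onb(A, b, kmax)
for k in (2, 5, 10, 20, 30):
    Vk = V[:, :k]
    # ||sin Theta||_2 = ||(I - V_k V_k^T) Q_k||_2 for orthonormal bases Q_k, V_k.
    dist = np.linalg.norm(Q[:, :k] - Vk @ (Vk.T @ Q[:, :k]), 2)
    print(f"k = {k:2d}: relative error = {errors[k-1]:.2e}, subspace distance = {dist:.3f}")
kopt = int(np.argmin(errors)) + 1
print(f"semi-convergence: error is smallest at k = {kopt} ({errors[kopt-1]:.2e})")
```

On such a severely ill-posed model one typically sees the relative error first decrease and then grow once the noise starts to dominate, while the subspace distance stays small for the first iterations, consistent with the behavior the abstract describes; the specific numbers depend entirely on the assumed decay rate and noise level.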