The Cramér-Rao lower bound (CRB) is widely used in parameter estimation performance analysis. It can be achieved by a maximum likelihood estimator (MLE) at sufficiently high SNR; below a certain SNR, however, the MLE mean-square error departs significantly from the CRB, displaying a threshold behavior. This departure is partially attributed to the fact that an MLE with nonlinear parameter dependence is biased at low SNR, while a local performance bound such as the CRB applies only to unbiased estimators. Indeed, the information inequality from which the CRB is derived does include bias-related terms, but these are ignored in the commonly used form of the CRB (i.e., the inverse of the Fisher information) because they are difficult to evaluate. Using a first-order approximation of the MLE bias, this paper presents a complete CRB that includes both the bias contribution and the Fisher information, and applies it to array-based bearing estimation. Evaluation examples demonstrate that the revised CRB displays some threshold behavior; however, it still departs substantially from the MLE simulation results. The results suggest that 1) a more accurate bias estimate is needed to make the approach practically meaningful, and 2) it may be more sensible to combine the bias contribution with a large-error bound such as the Barankin bound, so that both mainlobe and sidelobe ambiguities are taken into account.
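The bias-aware bound mentioned above can be illustrated, for a scalar parameter, by the standard biased-estimator form of the information inequality, MSE(θ) ≥ (1 + b′(θ))² / J(θ) + b(θ)², where J is the Fisher information and b the estimator bias. The sketch below evaluates this for a bearing estimate; the linear bias model `b` and the single-source, half-wavelength uniform-linear-array Fisher expression `J` are illustrative assumptions, not the paper's actual quantities.

```python
import numpy as np

def biased_crb(theta, fisher, bias, eps=1e-6):
    """Scalar biased-estimator CRB: (1 + b'(theta))^2 / J(theta) + b(theta)^2.

    The bias gradient is obtained by central differences so that any
    (hypothetical) bias model can be plugged in.
    """
    db = (bias(theta + eps) - bias(theta - eps)) / (2 * eps)  # numerical b'(theta)
    return (1.0 + db) ** 2 / fisher(theta) + bias(theta) ** 2

# Assumed setup: N-element half-wavelength ULA, single source at bearing theta,
# deterministic-signal Fisher information (a standard textbook-style expression):
#   J(theta) = 2 * SNR * (pi * cos(theta))^2 * N * (N^2 - 1) / 6
N = 8
snr = 10.0  # linear (not dB) SNR
J = lambda th: 2.0 * snr * (np.pi * np.cos(th)) ** 2 * N * (N ** 2 - 1) / 6.0

# Hypothetical first-order bias model: small pull toward broadside at low SNR.
b = lambda th: -0.05 * th

theta0 = np.deg2rad(20.0)
unbiased = 1.0 / J(theta0)              # commonly used CRB (inverse Fisher information)
revised = biased_crb(theta0, J, b)      # CRB including the bias contribution
```

With this toy bias model the (1 + b′)² factor slightly shrinks the Fisher term while the b² term adds a floor, so the two bounds separate as the assumed bias grows; this mirrors, in miniature, how the bias contribution modifies the classical inverse-Fisher CRB.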