The algorithms used in semiconductor device simulation are investigated. Inversion algorithms such as SOR, SI, generalized ICCG, and the Crout method are compared in terms of convergence and required computer resources for various devices and bias conditions. For linearization of the basic equations, a quasi-coupled method is compared with Gummel's conventional decoupled method. Numerical experimentation shows that even the SOR method, which has the slowest convergence of these algorithms, provides good results efficiently when used properly. The quasi-coupled method is also effective in linearizing the basic equations for transient analysis or high-bias conditions without a significant increase in the required memory. Consequently, a simulator equipped with several algorithms, each applied appropriately, is shown to be necessary for two- and three-dimensional analysis. Guidelines for applying these numerical algorithms effectively are described.
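As background for the comparison of inversion algorithms, the following is a minimal sketch of the SOR iteration for a linear system Ax = b. The relaxation factor, tolerance, and test matrix are illustrative assumptions, not values from the paper; in an actual device simulator the system would arise from discretizing the Poisson and continuity equations.

```python
import numpy as np

def sor_solve(A, b, omega=1.5, tol=1e-8, max_iter=10_000):
    """Successive over-relaxation for A x = b (illustrative parameters)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel sweep using already-updated entries for j < i,
            # then over-relax by the factor omega.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Small diagonally dominant example system (hypothetical data)
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])
print(sor_solve(A, b))
```

The choice of omega governs the convergence rate, which is one reason "proper use" matters for SOR as the abstract notes; direct methods such as Crout factorization avoid iteration entirely at the cost of more memory.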