The two most popular model selection rules in the signal processing literature have been Akaike's information criterion (AIC) and Rissanen's principle of minimum description length (MDL). These rules are similar in form in that they both consist of a data term and a penalty term. Their data terms are identical, but their penalties differ, with the MDL being more stringent toward overparameterization. The AIC assigns an equal incremental penalty for each additional model parameter, regardless of the parameter's role in the model. In most of the literature on model selection, the MDL also appears in a form that assigns an equal penalty to every unknown parameter. We refer to this MDL criterion as the naive MDL. In this paper, we show that identical penalization of every parameter is not appropriate and that the penalty must depend on the model structure and the type of model parameters. Our approach is Bayesian and relies on large-sample theory. We derive maximum a posteriori (MAP) rules for several different families of competing models and obtain forms that are similar to the AIC and the naive MDL. For some families, however, we find that the derived penalties are different. In those cases, our extensive simulations show that the MAP rule outperforms the AIC and the naive MDL.
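To make the shared data term and differing penalties concrete, the standard forms assumed here are AIC(k) = -ln f(y | θ̂_k) + k and naive MDL(k) = -ln f(y | θ̂_k) + (k/2) ln N, where θ̂_k is the maximum likelihood estimate of the k unknown parameters and N is the number of data samples. The following Python sketch is an illustration under these assumed forms, not part of the paper itself; the polynomial example and function name are hypothetical. It scores a family of nested polynomial models in white Gaussian noise, for which the concentrated negative log-likelihood reduces to (N/2) ln σ̂²_k up to an additive constant.

```python
import numpy as np

def aic_and_naive_mdl(y, max_params):
    """Score nested polynomial models with AIC and the naive MDL.

    Both criteria share the same data term (the concentrated negative
    log-likelihood under white Gaussian noise); they differ only in the
    per-parameter penalty, which for the naive MDL grows with the data length.
    """
    n = len(y)
    t = np.linspace(0.0, 1.0, n)                  # normalized time axis
    scores = []
    for k in range(1, max_params + 1):
        X = np.vander(t, k, increasing=True)      # degree-(k-1) polynomial, k coefficients
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        sigma2 = np.mean(resid ** 2)              # ML estimate of the noise variance
        data_term = 0.5 * n * np.log(sigma2)      # identical in both criteria
        aic = data_term + k                       # one penalty unit per parameter
        mdl = data_term + 0.5 * k * np.log(n)     # heavier penalty for large n
        scores.append((k, aic, mdl))
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 200
    t = np.linspace(0.0, 1.0, n)
    y = 1.0 + 3.0 * t + rng.normal(scale=0.5, size=n)   # true model has 2 parameters
    for k, a, m in aic_and_naive_mdl(y, max_params=5):
        print(f"k={k}: AIC={a:8.2f}   naive MDL={m:8.2f}")
```

In this sketch every parameter contributes the same penalty increment (1 for AIC, (1/2) ln N for the naive MDL), which is exactly the assumption the paper argues against for certain model families.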