This article examines four problems with past evaluations of presidential election forecasting and suggests one aspect of the models that could be improved. Past criticism has faltered in establishing an overall appraisal of the forecasting equations, in assessing the accuracy of both the forecasting models and their forecasts of individual election results, in identifying the theoretical foundations of the forecasts, and in distinguishing between data-mining and learning in model revisions. I contend that overall assessments are inherently arbitrary, that benchmarks can be established for reasonable evaluations of forecast accuracy, that blanket assessments of forecasts are unwarranted, that there are strong (though necessarily limited) theoretical foundations for the models, and that models should be revised in light of experience, while taking care to avoid data-mining. The article also examines whether current forecasting models grounded in retrospective voting theory should be revised to take into account the partial-referendum nature of non-incumbent, open-seat elections such as the 2008 election.