This article presents an empirical approach that demonstrates a theoretical connection between (information-theoretic) entropy measures and the finite-time performance of the simulated annealing algorithm. The methodology developed leads to several computational approaches for creating problem instances useful in testing and demonstrating the entropy/performance connection: the use of generic configuration spaces, polynomial transformations between NP-hard problems, and modification of penalty parameters. In particular, the computational results show that higher entropy measures are associated with superior finite-time performance of the simulated annealing algorithm.
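As an illustration only, the sketch below shows a minimal simulated annealing run in Python on a toy number-partitioning instance, together with the Shannon entropy of the empirical distribution of visited states. The problem choice, cooling schedule, parameters, and the use of empirical visited-state entropy are assumptions made for this sketch; they stand in for, but do not reproduce, the entropy measures and experimental setup studied in the article.

```python
import math
import random
from collections import Counter


def simulated_annealing(objective, neighbor, x0, t0=1.0, alpha=0.95, steps=5000, seed=0):
    """Basic simulated annealing with a geometric cooling schedule.

    Records every accepted state so that an empirical (Shannon) entropy of
    the visited-state distribution can be computed afterwards. This is an
    illustrative proxy, not the entropy measure defined in the article.
    """
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, best_f = x, fx
    t = t0
    visited = []
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = objective(y)
        # Metropolis acceptance: always accept improvements, otherwise
        # accept with probability exp(-(fy - fx) / t).
        if fy <= fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < best_f:
                best, best_f = x, fx
        visited.append(x)
        t *= alpha  # geometric cooling
    return best, best_f, visited


def empirical_entropy(states):
    """Shannon entropy (in nats) of the empirical distribution of states."""
    counts = Counter(states)
    n = len(states)
    return -sum((c / n) * math.log(c / n) for c in counts.values())


# Toy instance: partition 16 random weights into two subsets
# so that the subset sums are as close as possible.
_rng = random.Random(1)
weights = [_rng.randint(1, 50) for _ in range(16)]


def objective(x):
    # Absolute difference between the two subset sums.
    s = sum(w if b else -w for w, b in zip(weights, x))
    return abs(s)


def neighbor(x, rng):
    # Flip one randomly chosen bit (move one weight to the other subset).
    i = rng.randrange(len(x))
    y = list(x)
    y[i] = 1 - y[i]
    return tuple(y)


x0 = tuple(0 for _ in weights)
best, best_f, visited = simulated_annealing(objective, neighbor, x0)
print("best objective:", best_f)
print("empirical entropy of visited states:", round(empirical_entropy(visited), 3))
```

In this toy setting the reported entropy only summarizes how broadly the annealing run explored the configuration space under the assumed schedule; the article's results concern formally defined entropy measures and controlled problem-instance constructions rather than this ad hoc statistic.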