Domain gaps between different datasets limit the generalization ability of CNN models. Precise evaluation of the domain gap can help improve the generalization ability of CNNs. This paper proposes a computational framework to evaluate gaps between different domains, e.g., judging which source domain is closer to the target domain. Our model is based on the observation that, given a classifier well trained on a source domain, the entropy of its classification scores at the output layer can be used as an indicator of the domain gap: a smaller domain gap generally corresponds to a smaller entropy of the classification scores. To further boost the discriminative power in distinguishing domain gaps, a novel training strategy is proposed that supervises the model to produce smaller entropy on one source domain and larger entropy on the other source domains. This supervision leads to an efficient and discriminative domain gap evaluation model. Extensive experiments on multiple datasets, including faces, vehicles, fashion, and persons, show that our method can reasonably measure domain gaps. We further conduct experiments on the domain adaptive person ReID task, where our method is adopted for pre-trained model selection, pre-trained model fusion, source dataset fusion, and source dataset selection. As shown in the experiments, our method substantially boosts ReID accuracy. To the best of our knowledge, this is an original work focusing on computational domain gap evaluation. Our code is available at https://github.com/liu-xb/DomainGapEvaluation.
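For concreteness, the core idea of reading the entropy of a source-trained classifier's scores as a gap indicator can be sketched as follows. This is a minimal illustration assuming a PyTorch classifier and a data loader over the target domain; the function name `mean_prediction_entropy` and the ranking snippet are hypothetical and not the released implementation.

```python
import torch
import torch.nn.functional as F

def mean_prediction_entropy(model, loader, device="cuda"):
    """Average Shannon entropy of the classifier's softmax scores on a dataset.

    A lower average entropy is taken as evidence of a smaller gap between the
    domain the classifier was trained on and the domain the loader samples from.
    """
    model.eval()
    total, count = 0.0, 0
    with torch.no_grad():
        for images, _ in loader:
            logits = model(images.to(device))
            probs = F.softmax(logits, dim=1)
            # Per-sample entropy; the small epsilon avoids log(0).
            entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1)
            total += entropy.sum().item()
            count += entropy.numel()
    return total / max(count, 1)

# Hypothetical usage: rank candidate source models by their entropy on the
# target domain and pick the one with the smallest (i.e., closest) gap.
# gaps = {name: mean_prediction_entropy(m, target_loader) for name, m in models.items()}
# best_source = min(gaps, key=gaps.get)
```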