The significant growth in the computation and parameter storage costs of CNNs has driven their success in various applications, but it also restricts their deployment on edge devices. Consequently, many pruning methods have been proposed for neural network compression and acceleration. However, these methods have two major limitations. First, prevailing methods usually design a single pruning criterion for the primitive network and fail to consider the diversity of potential optimal sub-network structures. Second, they train the sub-network with conventional training methods, which is insufficient to exploit the expressive ability of the sub-network on the current task. In this paper, we propose the Model Selection - Knowledge Distillation (MS-KD) framework to address these problems. Specifically, we develop multiple pruning criteria for the primitive network and obtain the potentially optimal structure through model selection. Furthermore, instead of conventional training, we use knowledge distillation to train the learned sub-network and make full use of its structural advantages. To validate our approach, we conduct extensive experiments on popular image classification datasets. The results demonstrate that our MS-KD framework outperforms existing methods across a wide range of datasets, models, and inference costs.
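As a concrete illustration (a generic sketch of soft-target distillation, not necessarily the exact objective used in this work), the sub-network obtained by model selection can be trained as the student of the primitive network with a loss of the form

\[
\mathcal{L}_{\mathrm{KD}} = (1-\alpha)\,\mathcal{L}_{\mathrm{CE}}\bigl(\sigma(z_{s}),\, y\bigr) + \alpha\, T^{2}\, \mathrm{KL}\bigl(\sigma(z_{t}/T)\,\big\|\,\sigma(z_{s}/T)\bigr),
\]

where $z_{s}$ and $z_{t}$ denote the logits of the sub-network (student) and the primitive network (teacher), $\sigma$ is the softmax, $y$ the ground-truth label, $T$ the distillation temperature, and $\alpha$ the weight balancing the supervised and distillation terms.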