Measuring the similarity between two image sets is instrumental in many computer vision tasks, such as video face recognition, multi-shot person re-identification, and gait recognition. Most recent works do this by aggregating the embedding features of the images into a fixed-size vector and computing a metric in the vector space (e.g., Euclidean distance). The embedding function can be learned with deep metric learning (DML) techniques. However, methods relying on feature aggregation fail to capture the diversity and uncertainty within image sets. In this paper, we obviate the need for feature aggregation and propose a novel Statistical Distance Metric Learning (SDML) framework, which represents each image set as a probability distribution in the embedding feature space and compares two image sets by the statistical distance between their distributions. Among the many statistical distances, we choose Jeffrey's divergence (JD), which can be estimated from two embedding feature sets with a kNN-based density estimator. We also design a statistical centroid loss function to enhance the discriminative power of the training process. Our SDML framework naturally preserves the diversity within an image set and the relation between two sets. We evaluate the proposed approach on gait recognition and multi-shot person re-identification. The experimental results show that SDML outperforms conventional DML and achieves competitive or superior performance compared to previous state-of-the-art methods on the aforementioned tasks.
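For orientation only (the abstract does not spell out the estimator), Jeffrey's divergence is the symmetrized Kullback-Leibler (KL) divergence,
\[
  \mathrm{JD}(P, Q) \;=\; D_{\mathrm{KL}}(P \,\|\, Q) + D_{\mathrm{KL}}(Q \,\|\, P)
  \;=\; \int \bigl(p(x) - q(x)\bigr) \log \frac{p(x)}{q(x)} \, dx ,
\]
and, as a sketch of a standard kNN-based estimator rather than the paper's exact formulation, each KL term can be estimated from samples $\{x_i\}_{i=1}^{n} \sim P$ and $\{y_j\}_{j=1}^{m} \sim Q$ in $\mathbb{R}^d$ as
\[
  \widehat{D}_{\mathrm{KL}}(P \,\|\, Q) \;=\; \frac{d}{n} \sum_{i=1}^{n} \log \frac{\nu_k(x_i)}{\rho_k(x_i)} \;+\; \log \frac{m}{n-1},
\]
where $\rho_k(x_i)$ and $\nu_k(x_i)$ denote the distance from $x_i$ to its $k$-th nearest neighbor among the remaining samples of $P$ and among the samples of $Q$, respectively.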