Domain generalization person re-identification (DG ReID) aims to deploy models trained with supervision on multiple source domains to unseen target domains without fine-tuning. The source and target domains exhibit significant domain gaps, and existing methods overfit the source-domain distribution and therefore generalize poorly to unseen domains. This paper proposes a DG ReID model based on an Instance Style-aware Transformer (ISTDG), which enhances generalization through cross-domain hard sampling, style exact-mixing, and an attention-aware ViT. To implement cross-domain hard sampling and improve the efficiency of mini-batch training, we design a Cross-domain Instance-aware Hard-mining (CIH) module that mines hard batches and drives the model to learn domain-generalizable features. To strengthen data augmentation and mix styles across instances along the channel dimension, we introduce a Style ExactMixing method (SEMix) that aligns the feature distributions of the source and target instances being mixed, thereby generating new instances consistent with intra-batch styles. To let the ViT exploit its self-attention mechanism and focus on local fine-grained discriminative features of persons, we design an Attention-aware-based Local Feature Alignment (ALFA) module attached to the prediction layer of the ViT. Extensive experiments in both single-source and multi-source settings verify that the proposed ISTDG significantly outperforms state-of-the-art (SOTA) methods.
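To make the channel-dimension style mixing concrete, the following is a minimal sketch of the general idea behind SEMix, assuming MixStyle-like per-instance channel statistics; the function name `style_mix`, the mixing weight `lam`, and the random pairing of instances are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def style_mix(features, lam=0.7, rng=None):
    """Mix instance styles across a batch in the channel dimension (a sketch,
    not ISTDG's exact SEMix). Style is modeled as per-instance, per-channel
    mean and std of the feature map, as in instance normalization.

    features: (B, C, H, W) array of feature maps.
    lam: interpolation weight between an instance's own style and its pair's.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    mu = features.mean(axis=(2, 3), keepdims=True)           # per-instance channel means
    sig = features.std(axis=(2, 3), keepdims=True) + 1e-6    # per-instance channel stds
    normed = (features - mu) / sig                           # strip each instance's style
    perm = rng.permutation(features.shape[0])                # pair each instance with another
    mu_mix = lam * mu + (1 - lam) * mu[perm]                 # interpolate channel means
    sig_mix = lam * sig + (1 - lam) * sig[perm]              # interpolate channel stds
    return normed * sig_mix + mu_mix                         # re-style with mixed statistics
```

With `lam=1.0` the instance keeps its own style and the input is returned unchanged; smaller `lam` pulls each instance's channel statistics toward those of its randomly paired partner, producing new in-batch style variants without altering content.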
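For readers unfamiliar with hard mining in mini-batch training, the sketch below shows generic batch-hard triplet selection, a plausible building block for a module like CIH; it is an assumption for illustration and does not model the cross-domain aspect of the paper's actual CIH design.

```python
import numpy as np

def batch_hard_triplets(dist, labels):
    """For each anchor in a mini-batch, select the hardest positive
    (farthest sample with the same identity) and the hardest negative
    (closest sample with a different identity).

    dist: (B, B) pairwise distance matrix between batch features.
    labels: (B,) identity labels.
    Returns per-anchor hardest-positive and hardest-negative distances.
    """
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    eye = np.eye(len(labels), dtype=bool)
    pos_mask = same & ~eye          # same identity, excluding the anchor itself
    neg_mask = ~same                # different identity
    # Hardest positive: maximum distance among same-identity pairs.
    hardest_pos = np.where(pos_mask, dist, -np.inf).max(axis=1)
    # Hardest negative: minimum distance among different-identity pairs.
    hardest_neg = np.where(neg_mask, dist, np.inf).min(axis=1)
    return hardest_pos, hardest_neg
```

These per-anchor hardest pairs are what a triplet-style loss would then push apart and pull together, focusing the gradient on the most informative samples in the batch.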