There are notable style differences among person re-identification (ReID) datasets, such as brightness, tone, resolution, background, and clothing style, which pose serious challenges for cross-domain person ReID. Two approaches are commonly used to address this problem. One is to remove style differences between datasets with specific modules, such as instance normalization (IN); however, such modules also filter out a large amount of information that is valuable for ReID. The other is to use person attributes as auxiliary information, but existing methods do not deeply explore the relationship between attribute features and global features, leaving attribute information underutilized. We propose the domain-invariant feature extraction and fusion (DFEF) method, which consists of an attention and style normalization (ASN) module and an attribute feature extraction and fusion (AFEF) module. The ASN module integrates spatial and channel attention on top of the IN layer to remove style differences between datasets while recovering the filtered-out information that is useful for ReID. The AFEF module comprises an attribute branch and a feature fusion module. In the attribute branch, we embed the convolutional block attention module (CBAM) and adopt a multi-label focal loss (MLFL) to improve the accuracy of attribute recognition. In the feature fusion module, we propose a dispersion reweighting strategy to explore the correlation between attribute features and global features. The proposed DFEF method achieves 30.1% and 35.0% mAP on Market-1501 -> DukeMTMC-reID and DukeMTMC-reID -> Market-1501, respectively.
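
To illustrate the idea behind ASN, the following PyTorch sketch applies instance normalization and then uses channel and spatial attention to re-weight the residual, i.e., the content that IN filtered out, before adding it back. The class name `ASNBlock`, the squeeze-and-excitation-style channel attention, and the 7x7 spatial attention are assumptions made for illustration; the paper's actual module design may differ.

```python
import torch
import torch.nn as nn


class ASNBlock(nn.Module):
    """Sketch of an attention-and-style-normalization block.

    IN removes dataset-specific style; the residual (x - IN(x)) holds
    the filtered-out content, which channel and spatial attention
    re-weight before it is added back. This is an illustrative design,
    not the authors' code.
    """

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.inorm = nn.InstanceNorm2d(channels, affine=True)
        # Channel attention: squeeze-and-excitation over the residual.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: 7x7 conv over pooled channel statistics.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        normalized = self.inorm(x)   # style differences removed
        residual = x - normalized    # information filtered out by IN
        residual = residual * self.channel_att(residual)
        pooled = torch.cat(
            [residual.mean(dim=1, keepdim=True),
             residual.amax(dim=1, keepdim=True)], dim=1)
        residual = residual * self.spatial_att(pooled)
        return normalized + residual  # recover ReID-useful content
```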
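
The multi-label focal loss used in the attribute branch can be read as a per-attribute binary focal loss that down-weights well-classified labels. A minimal version is sketched below; the hyperparameters `gamma` and `alpha` and the mean reduction are illustrative defaults, not values taken from the paper.

```python
import torch
import torch.nn.functional as F


def multi_label_focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Focal loss applied independently to each attribute label.

    logits, targets: (batch, num_attributes); targets are 0/1 floats.
    Illustrative hyperparameters, not the paper's settings.
    """
    probs = torch.sigmoid(logits)
    # Per-label binary cross-entropy, kept element-wise.
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    # Probability assigned to the correct label for each attribute.
    p_t = probs * targets + (1 - probs) * (1 - targets)
    # Class-balancing weight for positive vs. negative labels.
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()
```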