Domain generalization aims to transfer knowledge to target domains unseen during training, even in the presence of domain gaps. However, in complex industrial settings new fault types emerge frequently, and because such faults are rare, the collected data may not cover the full range of possible fault conditions. As a result, it is difficult to guarantee that the label sets of the multiple source domains overlap with those of the unseen target domains. This setting assumes no prior knowledge of the label sets and requires a model to learn from multiple source domains and perform well on unknown target domains. In this paper, we propose a Domain-Private-Suppress Meta-Recognition Network (DPSMR). It quantifies channel-level transferability to continuously enhance the robustness of channels to domain shift, thereby promoting generalization over the shared label set. An enhanced meta-recognition calibration algorithm avoids overconfident network predictions, ensuring the reliable recognition of private samples. A dual-consistency loss reduces channel instability and facilitates learning domain-invariant features. Experimental results on two multi-domain datasets demonstrate that DPSMR outperforms state-of-the-art methods.
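The abstract does not detail the meta-recognition calibration itself. As a rough, non-authoritative illustration of the general idea (not the DPSMR algorithm), the sketch below shows an OpenMax-style extreme-value recalibration that tempers overconfident predictions and routes probability mass to an explicit unknown (private) class. The function names, tail size, and Euclidean distance here are assumptions made purely for the example.

```python
import numpy as np
from scipy.stats import weibull_min

# Illustrative extreme-value (Weibull) meta-recognition calibration, in the
# spirit of OpenMax; NOT the exact DPSMR procedure. Distances between training
# activations and each class mean are modeled with a Weibull tail, and test
# logits are recalibrated so ambiguous samples receive mass on an "unknown"
# (private) class instead of an overconfident known-class prediction.

def fit_weibull_tails(activations, labels, num_classes, tail_size=20):
    """Fit one Weibull tail model per class from training activations."""
    means, models = [], []
    for c in range(num_classes):
        acts_c = activations[labels == c]                  # activations of class c
        mean_c = acts_c.mean(axis=0)                       # class mean activation vector
        dists = np.linalg.norm(acts_c - mean_c, axis=1)    # distances to the mean
        tail = np.sort(dists)[-tail_size:]                 # largest distances (the tail)
        shape, loc, scale = weibull_min.fit(tail, floc=0)  # EVT fit on the tail
        means.append(mean_c)
        models.append((shape, loc, scale))
    return means, models

def meta_recognition_calibrate(act, logits, means, models):
    """Recalibrate one sample's logits; returns probabilities incl. an unknown class."""
    calibrated = logits.astype(float).copy()
    unknown_mass = 0.0
    for c, (shape, loc, scale) in enumerate(models):
        dist = np.linalg.norm(act - means[c])
        # Probability that this distance is extreme for class c (likely not class c).
        w = weibull_min.cdf(dist, shape, loc=loc, scale=scale)
        calibrated[c] = logits[c] * (1.0 - w)              # shrink overconfident logits
        unknown_mass += logits[c] * w                      # shift mass to "unknown"
    scores = np.append(calibrated, unknown_mass)
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()                                 # softmax over known + unknown
```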