In the current landscape of Compositional Zero-Shot Learning (CZSL) methods that leverage CLIP, the predominant approach is based on prompt learning paradigms. These methods incur significant computational complexity when dealing with a large number of categories. Moreover, when confronted with new classification tasks, the prompts must be learned again, which is both time-consuming and resource-intensive. To address these challenges, we present a new methodology, named Mixture of Pretrained Experts (MoPE), for enhancing Compositional Zero-Shot Learning through a logit-level multi-expert fusion module. MoPE blends the benefits of large pre-trained models such as BERT, GPT-3, and Word2Vec to tackle Compositional Zero-Shot Learning effectively. First, we extract the text label space of each language model individually, then map the visual feature vectors into their respective text spaces; this preserves the integrity and structure of each original text space. Throughout this process, the pre-trained expert parameters remain frozen; only the mappings from visual features to the corresponding text spaces are learned, and these mappings can be viewed as multiple learnable visual experts. In the model fusion phase, we propose a new fusion strategy featuring a gating mechanism that dynamically adjusts the contribution of each expert, enabling our approach to adapt more effectively to a range of tasks and datasets. The method's robustness stems from the fact that the language models are not tailored to specific downstream datasets or losses, which preserves each large model's topology and broadens its potential for application. Preliminary experiments on the UT-Zappos, AO-Clevr, and C-GQA datasets indicate that MoPE performs competitively compared with existing techniques.
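To make the logit-level fusion concrete, the following is a minimal NumPy sketch of gated fusion over per-expert logits. The function names, shapes, and toy numbers are illustrative assumptions, not the paper's actual implementation: each frozen language-model expert is assumed to yield one logit vector over the classes, and a softmax gate mixes them per input.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_logit_fusion(expert_logits, gate_scores):
    """Hypothetical sketch of logit-level gated fusion.

    expert_logits: (num_experts, num_classes) logits from each expert head
    gate_scores:   (num_experts,) raw gate values for the current input
    Returns fused logits of shape (num_classes,).
    """
    weights = softmax(gate_scores)  # dynamic contribution of each expert
    # weighted sum of expert logits (contract over the expert axis)
    return np.tensordot(weights, expert_logits, axes=1)

# toy example: three experts (e.g. BERT-, GPT-3-, Word2Vec-aligned heads)
logits = np.array([[2.0, 0.5, 0.1],
                   [0.3, 1.8, 0.2],
                   [0.5, 0.4, 1.5]])
gates = np.array([1.0, 0.2, 0.1])  # gate favors the first expert here
fused = gated_logit_fusion(logits, gates)
pred = int(np.argmax(fused))  # → 0 (class favored by the dominant expert)
```

Because the gate weights are a softmax over learned scores, the fused prediction interpolates between experts rather than hard-selecting one, which is what lets the contributions adapt per task and per input.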