Multi-view Clustering (MVC) has gained significant attention in recent years due to its ability to extract consensus information from multiple perspectives. However, traditional MVC methods face two major challenges: (1) alleviating the representation degeneration caused by the pursuit of multi-view consensus, and (2) learning discriminative representations with clustering-friendly structures; in particular, most existing MVC methods overlook the importance of inter-cluster separability. To address these issues, we propose a novel contrastive-learning-based Dual Contrast Mechanism Deep Multi-view Clustering Network. Specifically, we first introduce view-specific autoencoders to extract latent features for each individual view. We then obtain consensus information across views through global feature fusion, reducing the pairwise representation discrepancy by maximizing the consistency between the view-specific and global feature representations. Subsequently, we design an adaptive weighting mechanism that automatically enhances useful views during feature fusion while suppressing unreliable ones, effectively mitigating the representation degeneration issue. Furthermore, within the contrastive learning framework, we introduce a Dynamic Cluster Diffusion (DCD) module that maximizes the distance between different clusters, thereby improving inter-cluster separability and yielding a clustering-friendly discriminative representation. Extensive experiments on multiple datasets demonstrate that our method not only achieves state-of-the-art clustering performance but also produces cluster structures with better separability.
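The adaptive weighted fusion step described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the per-view reliability score (cosine alignment with the unweighted mean representation) and the softmax normalization are hypothetical choices made for concreteness; the paper's actual weighting mechanism may differ.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def adaptive_weighted_fusion(views):
    """Fuse view-specific representations into one global representation.

    Each view receives a weight reflecting how well it agrees with the
    unweighted mean representation (a stand-in for "usefulness"); a softmax
    over these scores suppresses unreliable views rather than dropping them.
    `views` is a list of (n_samples, dim) arrays, one per view.
    """
    mean_rep = np.mean(views, axis=0)  # (n, d) unweighted consensus
    scores = []
    for z in views:
        # per-sample cosine similarity to the mean, averaged over samples
        num = np.sum(z * mean_rep, axis=1)
        den = np.linalg.norm(z, axis=1) * np.linalg.norm(mean_rep, axis=1) + 1e-8
        scores.append(np.mean(num / den))
    weights = softmax(np.array(scores))
    # weighted sum of the stacked views: (V,) x (V, n, d) -> (n, d)
    fused = np.tensordot(weights, np.stack(views), axes=1)
    return fused, weights

# toy example: two informative views sharing structure, one noisy view
rng = np.random.default_rng(0)
base = rng.normal(size=(8, 4))
views = [base + 0.05 * rng.normal(size=base.shape),
         base + 0.05 * rng.normal(size=base.shape),
         rng.normal(size=base.shape)]  # unreliable view
fused, weights = adaptive_weighted_fusion(views)
```

In this toy setup, the noisy third view should receive the smallest fusion weight, illustrating how such a mechanism can down-weight unreliable views during consensus formation.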