Recent methods for learning with noisy labels often mitigate label noise through sample selection and label correction. However, high feature similarity between classes can reduce the effectiveness of these methods. In this paper, we propose a learning method that uses contrastive learning to explicitly disentangle the features of highly similar classes in the feature space. Specifically, we first compute inter-class similarity to identify similar classes. We then introduce a new loss function that separates the features of samples from similar classes in the feature space. This resolves the mixing of similar classes that hampered previous methods. Our proposed method can easily be integrated into the loss functions of various existing methods. Experiments on CIFAR-10, CIFAR-100, WebVision, and Clothing1M show that our method achieves high accuracy under various noise patterns and significantly outperforms existing methods at high noise rates.
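The two steps described above (identifying similar classes, then penalizing feature proximity between them) can be sketched as follows. This is a minimal illustration with NumPy, not the paper's implementation: the cosine-similarity measure between class-mean features, the similarity threshold, and the hinge-style separation penalty are all assumptions made for the example.

```python
import numpy as np

def class_similarity(features, labels, num_classes):
    """Cosine similarity between class-mean feature vectors (assumed measure)."""
    means = np.stack([features[labels == c].mean(axis=0) for c in range(num_classes)])
    means = means / np.linalg.norm(means, axis=1, keepdims=True)
    return means @ means.T

def similar_pairs(sim, threshold=0.8):
    """Class pairs whose mean-feature similarity exceeds a threshold (hypothetical value)."""
    pairs = []
    num_classes = sim.shape[0]
    for i in range(num_classes):
        for j in range(i + 1, num_classes):
            if sim[i, j] >= threshold:
                pairs.append((i, j))
    return pairs

def separation_loss(features, labels, pairs, margin=0.5):
    """Hinge-style penalty pushing apart samples of similar classes (illustrative form)."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    total, count = 0.0, 0
    for i, j in pairs:
        # Pairwise cosine similarity between samples of the two similar classes.
        cos = f[labels == i] @ f[labels == j].T
        # Penalize only pairs whose similarity exceeds the margin.
        total += np.maximum(cos - margin, 0.0).sum()
        count += cos.size
    return total / max(count, 1)
```

In practice, a term like `separation_loss` would be added to the base training loss of an existing noisy-label method, which is what allows the approach to be integrated into other loss functions.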