Point clouds, the most prevalent representation of 3D data, are inherently unordered, unstructured, and discrete. Feature extraction from point clouds is therefore challenging: objects with similar styles are easily misclassified, and background clutter or noise can significantly degrade the performance of traditional classification models. To address these challenges, we introduce StyleContrast, a novel contrastive learning algorithm for style fusion. StyleContrast fuses, at the feature level, the styles of point clouds belonging to the same category across different domain datasets, thereby serving as a form of data augmentation. By aligning each point cloud with its style-fused counterpart in the feature space, StyleContrast drives the feature extractor to learn style-invariant features. In addition, a category-centric contrastive loss separates similar objects that belong to different categories. Experimental results demonstrate that StyleContrast achieves superior performance on ModelNet40, ShapeNet-Part, and ScanObjectNN, surpassing existing methods in classification accuracy. Ablation experiments further confirm that our approach excels at point cloud feature analysis.
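To make the two loss components concrete, the sketch below shows one plausible reading of the abstract in PyTorch. It is a minimal illustration under stated assumptions, not the paper's actual implementation: the fusion operator is assumed to be AdaIN-style statistic mixing, the function names (`style_fuse`, `contrastive_alignment`, `category_centric`) are hypothetical, and the temperature values are placeholders.

```python
import torch
import torch.nn.functional as F

def style_fuse(content, style, eps=1e-5):
    """Feature-level style fusion (an assumption: AdaIN-style statistic
    mixing; the paper's exact fusion operator may differ).
    content, style: (B, C, N) per-point features of same-category point
    clouds drawn from two different domain datasets."""
    c_mu = content.mean(dim=2, keepdim=True)
    c_std = content.std(dim=2, keepdim=True) + eps
    s_mu = style.mean(dim=2, keepdim=True)
    s_std = style.std(dim=2, keepdim=True)
    # Re-normalize content features with the style sample's statistics.
    return (content - c_mu) / c_std * s_std + s_mu

def contrastive_alignment(z, z_fused, tau=0.1):
    """InfoNCE-style loss aligning each pooled point cloud feature z (B, D)
    with the feature of its own style-fused counterpart z_fused (B, D),
    while repelling it from the other samples in the batch."""
    z = F.normalize(z, dim=1)
    z_fused = F.normalize(z_fused, dim=1)
    logits = z @ z_fused.t() / tau                 # (B, B) cosine similarities
    labels = torch.arange(z.size(0), device=z.device)
    return F.cross_entropy(logits, labels)

def category_centric(z, labels, centroids, tau=0.1):
    """Category-centric contrastive term (our reading of the abstract):
    each feature is pulled toward its class centroid and pushed away from
    the centroids of other, possibly visually similar, categories.
    centroids: (K, D) running class prototypes."""
    z = F.normalize(z, dim=1)
    c = F.normalize(centroids, dim=1)
    logits = z @ c.t() / tau                       # (B, K)
    return F.cross_entropy(logits, labels)
```

Under these assumptions, a training step would encode a point cloud and a same-category sample from another domain, fuse their per-point features with `style_fuse`, pool both branches, and minimize the sum of `contrastive_alignment` and `category_centric`; the first term encourages style invariance, the second sharpens inter-class boundaries.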