Facial Expression Recognition (FER) remains a challenging task under unconstrained conditions such as variations in illumination, pose, and occlusion. Current FER approaches mainly focus on learning discriminative features through local attention and global perception within visual encoders, while neglecting the rich semantic information in the text modality. Additionally, these methods rely solely on softmax-based activation layers for prediction, resulting in overconfident decisions that hamper the effective handling of uncertain samples and relationships. Such insufficient representations and overconfident predictions degrade recognition performance, particularly in unconstrained scenarios. To tackle these issues, we propose an end-to-end FER framework called UA-FER, which integrates vision-language pre-training (VLP) models with evidential deep learning (EDL) theory to enhance recognition accuracy and robustness. Specifically, to identify multi-grained discriminative regions, we propose the Multi-granularity Feature Decoupling (MFD) module, which decouples global and local facial representations based on image-text affinity while distilling universal knowledge from pre-trained VLP models. Additionally, to mitigate misjudgments arising from uncertain visual-textual relationships, we introduce the Relation Uncertainty Calibration (RUC) module, which corrects these uncertainties using EDL theory. In this way, the model enhances its ability to capture emotion-related discriminative representations and to handle uncertain relationships, thereby improving overall recognition accuracy and robustness. Extensive experiments on in-the-wild and in-the-lab datasets demonstrate that UA-FER outperforms state-of-the-art models.
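For readers unfamiliar with EDL-based uncertainty estimation, the sketch below illustrates the general mechanism that modules such as RUC build on: mapping classifier evidence to Dirichlet parameters and deriving a per-sample uncertainty mass. It follows the common Dirichlet-based EDL formulation and is a minimal illustration under our own assumptions, not the exact UA-FER implementation.

```python
# Minimal sketch of Dirichlet-based evidential uncertainty (a common EDL
# formulation); illustrative only, not the exact RUC module of UA-FER.
import torch
import torch.nn.functional as F

def edl_uncertainty(logits: torch.Tensor):
    """Map raw class logits to Dirichlet belief masses and uncertainty.

    logits: (batch, num_classes) raw outputs of a classification head.
    Returns per-class belief masses and a scalar uncertainty mass per sample.
    """
    num_classes = logits.shape[-1]
    evidence = F.softplus(logits)                # non-negative evidence e_k
    alpha = evidence + 1.0                       # Dirichlet parameters alpha_k = e_k + 1
    strength = alpha.sum(dim=-1, keepdim=True)   # Dirichlet strength S = sum_k alpha_k
    belief = evidence / strength                 # belief mass b_k = e_k / S
    uncertainty = num_classes / strength         # uncertainty mass u = K / S
    return belief, uncertainty

# Example: ambiguous logits yield a large uncertainty mass, confident logits a small one.
logits = torch.tensor([[0.1, 0.0, -0.1],
                       [5.0, -2.0, -3.0]])
belief, u = edl_uncertainty(logits)
print(belief, u)
```

Note that the belief masses and the uncertainty mass sum to one for each sample, which is what allows an EDL-style head to express "I don't know" instead of forcing an overconfident softmax decision on ambiguous inputs.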