Deep learning has achieved outstanding performance in natural language processing, but recent work has exposed its fragility against adversarial attacks. Synonym-based attacks are among the most damaging, since their generated samples closely resemble the original inputs. Several countermeasures have been proposed in the literature, but their defense effectiveness is unsatisfactory because of coarse single-granularity synonym clustering. To mitigate this dilemma, we propose a Granular-Ball Sample Enhancement-based defense Framework (GBSEF) for text adversarial attacks. Specifically, GBSEF first adopts an effective general synonym clustering algorithm that adaptively adjusts the granularity of synonym sets (i.e., granular-balls) for diverse datasets. Regarding each ball as a point, the function fitted to most of these points approximates the original data distribution well, so the relationships among words are faithfully represented by the granular-balls. GBSEF then replaces each input word with the center vector of the ball it belongs to, constructing robust samples that preserve both syntactic and semantic information. Finally, GBSEF combines a random substitution mechanism with the granular-balls, allowing it to exploit their multi-granularity structure and generate more diverse valid samples. Training on these samples yields strong performance. Extensive evaluations demonstrate the robustness and effectiveness of GBSEF against adversarial attacks, albeit with a slight performance decrease in normal scenarios without attacks. Meanwhile, GBSEF transfers well against adversarial samples. Compared with state-of-the-art defense countermeasures, under multiple attacks on four neural network models (i.e., CNN, LSTM, Bi-LSTM, and BERT), GBSEF consistently outperforms existing baselines.
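To make the center-vector substitution step concrete, the following is a minimal sketch of the idea as described above: words are greedily grouped into "balls" of nearby embeddings, and each word's vector is replaced by the mean (center) of its ball. The toy embeddings, the `radius` threshold, and all function names here are illustrative assumptions, not GBSEF's actual adaptive clustering algorithm.

```python
import numpy as np

def build_balls(words, vecs, radius=0.5):
    """Greedy single-pass grouping: a word joins the first ball whose
    center lies within `radius` (Euclidean distance); otherwise it
    seeds a new ball. Ball centers are running means of their members."""
    balls = []  # each ball: {"members": [...], "center": np.ndarray}
    for w, v in zip(words, vecs):
        for ball in balls:
            if np.linalg.norm(v - ball["center"]) <= radius:
                ball["members"].append(w)
                n = len(ball["members"])
                # incremental mean update of the ball center
                ball["center"] = ball["center"] + (v - ball["center"]) / n
                break
        else:
            balls.append({"members": [w], "center": v.astype(float)})
    return balls

def substitute(words, vecs, balls):
    """Replace each word's vector with the center of its ball,
    so all words in one synonym ball share a single representation."""
    out = []
    for w, v in zip(words, vecs):
        for ball in balls:
            if w in ball["members"]:
                out.append(ball["center"])
                break
    return np.stack(out)

# Toy 2-D "embeddings": two tight clusters of mutual synonyms.
words = ["good", "great", "bad", "awful"]
vecs = np.array([[1.0, 1.0], [1.1, 0.9], [-1.0, -1.0], [-0.9, -1.1]])
balls = build_balls(words, vecs, radius=0.5)
robust = substitute(words, vecs, balls)
```

After substitution, "good" and "great" map to the same center vector, so a synonym swap by an attacker leaves the model's input unchanged; the multi-granularity and random-substitution components of the full framework are omitted from this sketch.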