This article is about explainable artificial intelligence (XAI) for rule-based fuzzy systems [that can be expressed generically as y(x) = f(x)]. It explains why it is not valid to explain the output of Mamdani or Takagi-Sugeno-Kang rule-based fuzzy systems using IF-THEN rules, and why it is valid instead to explain the output of such systems as an association of the compound antecedents of a small subset of the original, larger set of rules, using a phrase such as "these linguistic antecedents are symptomatic of this output." Importantly, it provides a novel multi-step approach for obtaining such a small subset of rules for three kinds of fuzzy systems, and illustrates it by means of a comprehensive example. It also explains why the choice of antecedent membership function shapes may be more critical for XAI than it was before XAI, why linguistic approximation and similarity are essential for XAI, and it provides a way to estimate the quality of the explanations.
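To make the generic form y(x) = f(x) concrete, the following is a minimal sketch (not taken from the article) of a first-order Takagi-Sugeno-Kang (TSK) rule-based fuzzy system with two IF-THEN rules over a scalar input; the membership function centers, widths, and consequent coefficients are all illustrative assumptions.

```python
import math

def gauss(x, c, sigma):
    """Gaussian antecedent membership function centered at c with width sigma."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def tsk_output(x):
    """y(x) = f(x) for an illustrative two-rule first-order TSK fuzzy system.

    Rule 1: IF x is Low  THEN y1 = 0.5 * x + 1
    Rule 2: IF x is High THEN y2 = 2.0 * x - 1
    All parameter values below are assumptions for illustration only.
    """
    w1 = gauss(x, c=0.0, sigma=1.0)   # firing strength of Rule 1 ("Low")
    w2 = gauss(x, c=4.0, sigma=1.0)   # firing strength of Rule 2 ("High")
    y1 = 0.5 * x + 1                  # consequent of Rule 1
    y2 = 2.0 * x - 1                  # consequent of Rule 2
    # TSK output: firing-strength-weighted average of the rule consequents
    return (w1 * y1 + w2 * y2) / (w1 + w2)
```

Near x = 0 the output is dominated by Rule 1 and near x = 4 by Rule 2, which is why a small subset of strongly firing rules, rather than the full rule base, is the natural candidate for explaining a given output.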