In Multi-Agent Systems (MAS), which are complex systems, it is important to make agent reasoning comprehensible to users. MAS execution is difficult to monitor: it is not obvious what actually happens inside the system or how solutions are produced. In this context, our work aims to associate an explanation system with a multi-agent system. It consists, first, of intercepting significant agent-related events at runtime; an explanatory knowledge acquisition phase is then performed, in which knowledge attributes are stored in an explanation structure named KAGR. Second, the semantic links between these attributes are expressed in an extended causal map model, which constitutes the knowledge representation formalism. Finally, this model is interpreted using predicate logic. Together, the acquisition, representation, and interpretation of explanatory knowledge build up a knowledge-based system for explanation. In this paper, we address the interpretation phase, describing an explanation language used to interpret the built causal maps.
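As an informal illustration of this acquisition–representation–interpretation pipeline, the minimal Python sketch below shows one possible way intercepted agent events could be recorded as explanation-structure entries and linked into a causal map that is then read back as predicate-style statements. All names and fields here are hypothetical placeholders; the actual KAGR attributes, causal map extensions, and explanation language are those defined in the paper, not this sketch.

```python
from dataclasses import dataclass, field

# Hypothetical explanation-structure entry: one record per intercepted agent event.
# The real KAGR attributes are defined in the paper; these fields are placeholders.
@dataclass
class ExplanationEntry:
    agent: str                 # agent that produced the event
    event: str                 # significant event intercepted at runtime
    attributes: dict = field(default_factory=dict)

# A causal map seen as a directed graph: nodes are explanation entries,
# edges are labelled semantic (causal) links between them.
class CausalMap:
    def __init__(self):
        self.nodes: list[ExplanationEntry] = []
        self.edges: list[tuple[int, int, str]] = []  # (cause index, effect index, label)

    def add_entry(self, entry: ExplanationEntry) -> int:
        self.nodes.append(entry)
        return len(self.nodes) - 1

    def link(self, cause: int, effect: int, label: str = "causes") -> None:
        self.edges.append((cause, effect, label))

    def explain(self, effect: int) -> list[str]:
        """Trace the causes of an event and render them as predicate-style statements."""
        return [
            f"{label}({self.nodes[c].event}, {self.nodes[effect].event})"
            for (c, e, label) in self.edges
            if e == effect
        ]

# Usage: two intercepted events linked by a causal edge, then interpreted.
cmap = CausalMap()
a = cmap.add_entry(ExplanationEntry("agent1", "request_sent"))
b = cmap.add_entry(ExplanationEntry("agent2", "task_accepted"))
cmap.link(a, b)
print(cmap.explain(b))  # ['causes(request_sent, task_accepted)']
```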