Amid the rapid evolution of artificial intelligence (AI), the intricate model structures and opaque decision-making processes of AI-based systems have raised trustworthiness concerns in education. We therefore propose a novel three-layer knowledge tracing model designed to address trustworthiness in an intelligent tutoring system. Each layer is crafted to tackle a specific challenge: transparency, explainability, or accountability. We introduce an explainable AI (xAI) approach that produces technical interpretive information, which is validated against established educational theories and principles. The validated interpretive information is then translated from its technical context into educational insights, which are incorporated into a newly designed user interface. Our evaluations indicate that an intelligent tutoring system equipped with the proposed trustworthy knowledge tracing model significantly enhances both user trust and knowledge, from the perspectives of teachers and students alike. This study thus contributes a tangible solution that employs the xAI approach as an enabling technology for constructing trustworthy systems and tools in education.