Designing AI systems with cognitive and affective human-like abilities is not without risks. Although in normative language and legal theory responsibility falls on whoever breaks the law, human agents have developed various strategies to avoid responsibility by justifying their behavior. These justifications, offered as explanations, share a common feature: they follow a post hoc narrative supporting the behavior or choice, which can contribute to what is known as cognitive bias. This research, however, focuses not on bias but on analyzing, and proposing a solution to, another source of human error: noise. This concept, taken from Kahneman and his colleagues, refers to unwanted variability in judgments that should ideally be identical. Emotions are one of the primary sources of noise that these authors identify in human behavior. However, not all emotions can be regarded equally as sources of noise. A counterexample to this theory comes from studies in social cognition and the behavioral sciences that point to the essential role of regret in assigning and assuming responsibility. This paper proposes an interdisciplinary theoretical framework for Responsible AI called "The Ladder of Regret," divided into three levels in which human and artificial agents cooperate within goal-driven XAI. The aim is to build a Theory of Mind about the normative value of regret and to examine whether the somatic marker hypothesis, driven by the counterfactual component of anticipated regret, could serve as a recommendation norm for preventing unspecified errors before they occur.