Our ability to evaluate an experience retrospectively is important because it allows us to summarize its total value, and this summary value can later guide decisions about whether the experience merits repeating or should instead be avoided. However, when an experience unfolds over time, humans tend to assign disproportionate weight to its later part, which can lead to poor choices about repeating or avoiding experiences. Using model-based computational analyses of fMRI recordings in 27 male volunteers, we show that the human brain encodes the summary value of an extended sequence of outcomes in two distinct reward representations. We find that the overall experienced value is encoded accurately in the amygdala, but that its merit is excessively marked down by disincentive anterior insula activity if the sequence of experienced outcomes declines temporarily. Moreover, the statistical strength of this neural code can distinguish optimal from suboptimal decision-makers: optimal decision-makers encode overall value more strongly, whereas suboptimal decision-makers encode the disincentive markdown (DM) more strongly. The separate neural implementation of these two distinct reward representations confirms that suboptimal choice for temporally extended outcomes can result from a robust neural representation of a displeasing aspect of the experience, such as a temporary decline.