The rat-a-gorical imperative: Moral intuition and the limits of affective learning

Times cited: 48
Authors
Greene, Joshua D. [1]
Affiliation
[1] Harvard Univ, Dept Psychol, Ctr Brain Sci, Cambridge, MA 02138 USA
Keywords
Deontology; Utilitarianism; Consequentialism; Reinforcement learning; Model-free learning; Machine learning; Ethics; Normative ethics; Moral judgment; VENTROMEDIAL PREFRONTAL CORTEX; DECISION-MAKING; EMOTIONAL DOG; GRID CELLS; JUDGMENT; AMYGDALA; HARM; TRUSTWORTHINESS; PSYCHOPATHY; REFLECTION;
DOI
10.1016/j.cognition.2017.03.004
Chinese Library Classification (CLC) number
B84 [Psychology];
Discipline classification code
04; 0402;
Abstract
Decades of psychological research have demonstrated that intuitive judgments are often unreliable, thanks to their inflexible reliance on limited information (Kahneman, 2003, 2011). Research on the computational underpinnings of learning, however, indicates that intuitions may be acquired by sophisticated learning mechanisms that are highly sensitive and integrative. With this in mind, Railton (2014) urges a more optimistic view of moral intuition. Is such optimism warranted? Elsewhere (Greene, 2013) I've argued that moral intuitions offer reasonably good advice concerning the give-and-take of everyday social life, addressing the basic problem of cooperation within a "tribe" ("Me vs. Us"), but that moral intuitions offer unreliable advice concerning disagreements between tribes with competing interests and values ("Us vs. Them"). Here I argue that a computational perspective on moral learning underscores these conclusions. The acquisition of good moral intuitions requires both good (representative) data and good (value-aligned) training. In the case of inter-tribal disagreement (public moral controversy), the problem of bad training looms large, as training processes may simply reinforce tribal differences. With respect to moral philosophy and the paradoxical problems it addresses, the problem of bad data looms large, as theorists seek principles that minimize counter-intuitive implications, not only in typical real-world cases, but in unusual, often hypothetical, cases such as some trolley dilemmas. In such cases the prevailing real-world relationships between actions and consequences are severed or reversed, yielding intuitions that give the right answers to the wrong questions. Such intuitions, which we may experience as the voice of duty or virtue, may simply reflect the computational limitations inherent in affective learning. I conclude, in optimistic agreement with Railton, that progress in moral philosophy depends on our having a better understanding of the mechanisms behind our moral intuitions. © 2017 Elsevier B.V. All rights reserved.
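The abstract's computational point, that cached model-free values mirror the statistics of past experience and can therefore misfire when the usual action-outcome contingencies are severed or reversed, can be illustrated with a small sketch. The code below is not from the paper; it assumes a simple Rescorla-Wagner-style prediction-error update, and the scenario, numbers, and function names are purely illustrative.

# Minimal sketch (assumption, not the paper's model): model-free value learning
# via a Rescorla-Wagner-style prediction-error update, showing how a cached
# "intuitive" value tracks training statistics and misleads in an unusual case.
import random

def learn_cached_value(outcomes, alpha=0.1):
    """Update a single cached value V toward each observed outcome r: V <- V + alpha*(r - V)."""
    v = 0.0
    for r in outcomes:
        v += alpha * (r - v)
    return v

random.seed(0)

# Training world: an action like pushing someone almost always ends badly.
training_outcomes = [-1.0 if random.random() < 0.95 else 0.0 for _ in range(500)]
cached_value = learn_cached_value(training_outcomes)
print(f"Cached value of the action after training: {cached_value:.2f}")  # strongly negative

# A trolley-style hypothetical reverses the contingency (one harmed, five saved),
# but the cached value still reports the training-world statistics.
hypothetical_outcome = 4.0  # net lives saved in the unusual, hypothetical case
print(f"Model-free 'intuition': {cached_value:.2f}; actual outcome in this case: {hypothetical_outcome:.2f}")

Under these (illustrative) assumptions, the cached value keeps answering the question the training environment posed, which is the sense in which such intuitions may give "the right answers to the wrong questions."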
Pages: 66-77
Number of pages: 12
Related papers
92 entries in total
  • [1] Instrumental responding following reinforcer devaluation
    Adams, CD
    Dickinson, A
    [J]. QUARTERLY JOURNAL OF EXPERIMENTAL PSYCHOLOGY SECTION B-COMPARATIVE AND PHYSIOLOGICAL PSYCHOLOGY, 1981, 33 (MAY): 109 - 121
  • [2] [Anonymous], 2000, Moral dumbfounding: When intuition finds no reason
  • [3] [Anonymous], 1994, Descartes' Error: Emotion, Reason, and the Human Brain
  • [4] [Anonymous], SOCIAL COGNITIVE AND AFFECTIVE NEUROSCIENCE, DOI 10.1093/scan/nst102
  • [5] [Anonymous], JUSTICE
  • [6] [Anonymous], 2011, The Better Angels of Our Nature
  • [7] Aristotle, 1941, The Basic Works of Aristotle
  • [8] Emotion, decision making and the orbitofrontal cortex
    Bechara, A
    Damasio, H
    Damasio, AR
    [J]. CEREBRAL CORTEX, 2000, 10 (03) : 295 - 307
  • [9] Deciding advantageously before knowing the advantageous strategy
    Bechara, A
    Damasio, H
    Tranel, D
    Damasio, AR
    [J]. SCIENCE, 1997, 275 (5304) : 1293 - 1295
  • [10] Learning the value of information in an uncertain world
    Behrens, Timothy E. J.
    Woolrich, Mark W.
    Walton, Mark E.
    Rushworth, Matthew F. S.
    [J]. NATURE NEUROSCIENCE, 2007, 10 (09) : 1214 - 1221