Centro de Investigación en Complejidad Social, Facultad de Gobierno, Universidad del Desarrollo, Santiago 7610658, Chile.
Department of Psychology, University of Montreal, Montreal, QC, Canada H3C 3J7.
Proc Natl Acad Sci U S A. 2022 Oct 18;119(42):e2214005119. doi: 10.1073/pnas.2214005119. Epub 2022 Oct 10.
How does the mind make moral judgments when the only way to satisfy one moral value is to neglect another? Moral dilemmas posed a recurrent adaptive problem for ancestral hominins, whose cooperative social life created multiple responsibilities to others. For many dilemmas, striking a balance between two conflicting values (a compromise judgment) would have promoted fitness better than neglecting one value to fully satisfy the other (an extreme judgment). We propose that natural selection favored the evolution of a cognitive system designed for making trade-offs between conflicting moral values. Its nonconscious computations respond to dilemmas by constructing "rightness functions": temporary representations specific to the situation at hand. A rightness function represents, in compact form, an ordering of all the solutions that the mind can conceive of (whether feasible or not) in terms of moral rightness. An optimizing algorithm selects, among the feasible solutions, one with the highest level of rightness. The moral trade-off system hypothesis makes various novel predictions: People make compromise judgments, judgments respond to incentives, judgments respect the axioms of rational choice, and judgments respond coherently to morally relevant variables (such as willingness, fairness, and reciprocity). We successfully tested these predictions using a new trolley-like dilemma. This dilemma has two original features: It admits both extreme and compromise judgments, and it allows incentives (in this case, the human cost of saving lives) to be varied systematically. No other existing model predicts the experimental results, which contradict an influential dual-process model.
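The abstract's computational claim can be made concrete with a small sketch. The Python toy below is not from the paper; the concave functional form, the weights, and the option set are illustrative assumptions. It shows the two hypothesized steps: a situation-specific rightness function orders conceivable solutions, and an optimizing step picks the feasible solution with the highest rightness. With a concave benefit and a linear cost, an interior option wins (a compromise judgment), and raising the cost weight shifts the choice toward an extreme, illustrating how judgments could respond to incentives.

```python
# Toy model (illustrative assumptions, not the paper's implementation) of the
# hypothesized moral trade-off system: construct a rightness function over
# solutions, then maximize it over the feasible ones.
import math

def rightness(lives_saved: float, harm_inflicted: float,
              w_save: float = 1.0, w_harm: float = 0.3) -> float:
    """Hypothetical rightness function: diminishing moral returns to lives
    saved (sqrt) against a linear moral cost of the harm inflicted. The
    concavity is what lets a compromise beat both extremes."""
    return w_save * math.sqrt(lives_saved) - w_harm * harm_inflicted

def choose(feasible: list[tuple[float, float]], **weights) -> tuple[float, float]:
    """Optimizing step: among feasible (lives_saved, harm_inflicted) pairs,
    return one with the highest rightness."""
    return max(feasible, key=lambda s: rightness(*s, **weights))

# Feasible solutions: each life saved inflicts one unit of harm.
# The endpoints (0, 0) and (4, 4) are the extreme judgments.
options = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0), (4.0, 4.0)]

print(choose(options))              # (3.0, 3.0): a compromise judgment
print(choose(options, w_harm=0.6))  # (1.0, 1.0): costlier saving, nearer an extreme
```

Because the chosen solution maximizes a single ordering, such choices are automatically transitive and consistent under contraction of the option set, which is one reason a mechanism of this kind would predict judgments that respect the axioms of rational choice.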