Department of Cognitive Science, Carleton University, Ottawa, K1S 5B6, Canada.
Sci Rep. 2024 Sep 17;14(1):21733. doi: 10.1038/s41598-024-68024-3.
People make different moral judgments in similar moral dilemmas where one can act to sacrifice some number of lives to save several more. Research has shown that although people can reason that an action would save more lives, automatic processes can override deliberate reasoning. By having participants imagine hypothetical moral dilemmas, researchers have discovered that factors such as action versus omission, harm as a means versus a side-effect, and personal versus impersonal force can affect judgment. Joshua Greene suggests that these features affect people's judgments not because they are morally relevant, but because of the myopic nature of the automatic moral process. Greene hypothesizes that there is some myopic module or domain-general process that attaches a negative emotional response to contemplated violent actions. In the present research, a model of this myopic automatic process is paired with an analytic system to replicate deontological and utilitarian responses to moral dilemmas. Our system, MERDJ, models this in simulated spiking neurons. The system takes representations of specific moral dilemmas as inputs and outputs judgments of "appropriate" or "inappropriate".
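The dual-process architecture described above can be illustrated with a minimal sketch. This is not the MERDJ spiking-neuron implementation; all names, features, and weights here are hypothetical, chosen only to show how an affect-driven automatic response that is insensitive to the number of lives saved can combine with a deliberative count of outcomes to yield deontological or utilitarian judgments.

```python
# Hypothetical sketch of a dual-process moral-judgment model.
# Feature names and weights are illustrative assumptions, not the paper's model.
from dataclasses import dataclass

@dataclass
class Dilemma:
    lives_saved: int
    lives_lost: int
    personal: bool   # harm applied with personal force vs. impersonally
    as_means: bool   # harm used as a means vs. occurring as a side-effect

def automatic_response(d: Dilemma) -> float:
    """Myopic process: negative affect for up-close, instrumental harm,
    insensitive to how many lives the action would save."""
    affect = 0.0
    if d.personal:
        affect -= 1.0
    if d.as_means:
        affect -= 0.5
    return affect

def deliberative_response(d: Dilemma) -> float:
    """Analytic process: favors the action in proportion to net lives saved."""
    return 0.3 * (d.lives_saved - d.lives_lost)

def judge(d: Dilemma) -> str:
    """Judgment is the sign of the summed automatic and deliberative signals."""
    total = automatic_response(d) + deliberative_response(d)
    return "appropriate" if total > 0 else "inappropriate"

# Impersonal, side-effect harm (switch-style case): utilitarian response.
switch = Dilemma(lives_saved=5, lives_lost=1, personal=False, as_means=False)
# Personal harm as a means (footbridge-style case): deontological response.
footbridge = Dilemma(lives_saved=5, lives_lost=1, personal=True, as_means=True)
```

With these toy weights, `judge(switch)` returns "appropriate" (the deliberative signal dominates) while `judge(footbridge)` returns "inappropriate" (the automatic negative affect outweighs the same utilitarian benefit), mirroring the judgment pattern the abstract attributes to the two processes.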