Banks Jaime
College of Media & Communication, Texas Tech University, Lubbock, TX, United States.
Front Robot AI. 2021 May 28;8:670503. doi: 10.3389/frobt.2021.670503. eCollection 2021.
Moral status can be understood along two dimensions: moral agency [capacities to be and do good (or bad)] and moral patiency (extents to which entities are objects of moral concern), where the latter especially has implications for how humans accept or reject machine agents into human social spheres. As there is currently limited understanding of how people innately understand and imagine the moral patiency of social robots, this study inductively explores key themes in how robots may be subject to humans' (im)moral action across 12 valenced foundations in the moral matrix: care/harm, fairness/unfairness, loyalty/betrayal, authority/subversion, purity/degradation, liberty/oppression. Findings indicate that people can imagine clear dynamics by which anthropomorphic, zoomorphic, and mechanomorphic robots may benefit and suffer at the hands of humans (e.g., affirmations of personhood, compromising bodily integrity, veneration as gods, corruption by physical or information interventions). Patterns across the matrix are interpreted to suggest that moral patiency may be a function of whether people diminish or uphold the ontological boundary between humans and machines, though even moral upholdings bear notes of utilitarianism.