Ellen Svensson, Walter Osika, Per Carlbring
Department of Philosophy, Stockholm University, Stockholm, Sweden.
TEA Lab (Trustworthy and Ethical AI Lab), Center for Social Sustainability, Department of Neurobiology, Care Science and Society, Karolinska Institutet, Stockholm, Sweden.
Internet Interv. 2025 Jun 10;41:100844. doi: 10.1016/j.invent.2025.100844. eCollection 2025 Sep.
The use of AI in digital mental healthcare promises to make treatments more effective, accessible, and scalable than ever before. At the same time, the use of AI introduces a myriad of ethical concerns, including a lack of transparency, the risk that bias will deepen social inequalities, and the risk of responsibility gaps. This raises a crucial question: Can we rely on these systems to deliver care that is both ethical and effective? In attempts to regulate and ensure the safe use of AI-powered tools, calls for trustworthy AI systems have become central. However, the use of terms such as "trust" and "trustworthiness" risks furthering the anthropomorphization of AI systems by attaching human moral activities, such as trust, to artificial systems. In this article, we propose that terms such as "trustworthiness" be used with caution in relation to AI and that, when used, they reflect an AI system's ability to consistently demonstrate measurable adherence to ethical principles, such as respect for human autonomy, nonmaleficence, fairness, and transparency. On this approach, trustworthy and ethical AI can become a tangible goal rather than wishful thinking.