University Medicine Greifswald.
University of Tübingen.
Am J Bioeth. 2021 Jul;21(7):4-20. doi: 10.1080/15265161.2020.1863515. Epub 2021 Jan 4.
The development of artificial intelligence (AI) in medicine raises fundamental ethical issues. As one example, AI systems in the field of mental health successfully detect signs of mental disorders, such as depression, by using data from social media. These AI depression detectors (AIDDs) identify users at risk of depression prior to any contact with the healthcare system. This article focuses on the ethical implications of AIDDs for affected users' health-related autonomy. Firstly, it presents the (ethical) discussion of AI in medicine and, specifically, in mental health. Secondly, it introduces two models of AIDDs that use social media data, along with different usage scenarios. Thirdly, it critically discusses the concept of patient autonomy according to Beauchamp and Childress. Since this concept does not sufficiently encompass the specific challenges linked with the digital context of AIDDs in social media, the analysis finally proposes an extended concept of health-related digital autonomy.