McKernan Lindsey C, Clayton Ellen W, Walsh Colin G
Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, United States.
Department of Physical Medicine and Rehabilitation, Vanderbilt University Medical Center, Nashville, TN, United States.
Front Psychiatry. 2018 Dec 3;9:650. doi: 10.3389/fpsyt.2018.00650. eCollection 2018.
In the United States, the suicide rate has increased by 24% over the past 20 years, and suicide risk identification at the point of care remains a cornerstone of efforts to curb this epidemic (1). Because risk identification is difficult owing to symptom under-reporting, timing, or lack of screening, healthcare systems rely increasingly on risk scoring and, more recently, artificial intelligence (AI) to assess risk. AI is the science of solving problems and accomplishing tasks, through automated or computational means, that normally require human intelligence. This science is decades old and encompasses both traditional predictive statistics and machine learning. Only in the last few years has it been applied rigorously to suicide risk prediction and prevention. Applying AI in this context raises significant ethical concerns, particularly in balancing beneficence with respect for personal autonomy. To navigate the ethical issues raised by suicide risk prediction, we provide recommendations in three areas, communication, consent, and controls, for both providers and researchers (2).