Kathryn Muyskens, Yonghui Ma, Jerry Menikoff, James Hallinan, Julian Savulescu
Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore.
Centre for Bioethics, Xiamen University, Xiamen, China.
Asian Bioeth Rev. 2024 May 15;17(1):207-223. doi: 10.1007/s41649-024-00290-9. eCollection 2025 Jan.
Artificial intelligence (AI) has attracted growing attention, both positive and negative. Its potential applications in healthcare are manifold and revolutionary; within medical imaging and radiology (the focus of this paper), adopting the technology stands to deliver significant gains in accuracy and speed, as well as significant cost savings. Because of its novelty, a norm of keeping humans "in the loop" wherever AI mechanisms are deployed has become synonymous with good ethical practice in some circles. It has been argued that keeping humans "in the loop" is important for safety, accountability, and the maintenance of institutional trust. However, as this paper's case study of machine learning for the detection of lumbar spinal stenosis (LSS) reveals, there are scenarios in which insisting on keeping humans in the loop (in other words, resisting automation) seems unwarranted and could cause us to miss very real and important opportunities in healthcare, particularly in low-resource settings. It is important to acknowledge these opportunity costs of resisting automation in contexts where better options may be unavailable. Using an AI model based on convolutional neural networks, developed by a team of researchers at NUH/NUS medical school in Singapore to automatically detect and classify lumbar spinal canal, lateral recess, and neural foraminal narrowing in an MRI scan of the spine in order to diagnose LSS, we aim to demonstrate that where certain criteria hold (e.g., the AI is as accurate as or better than human experts, risks are low in the event of an error, the gain in wellbeing is significant, and the task being automated is not essentially or importantly human), it is morally permissible and even desirable to kick the humans out of the loop.