Department of Development and Regeneration, KU Leuven, Leuven, Belgium.
Department of Biomedical Data Sciences, Leiden University Medical Center (LUMC), Leiden, The Netherlands.
J Am Med Inform Assoc. 2019 Dec 1;26(12):1651-1654. doi: 10.1093/jamia/ocz130.
There is increasing awareness that the methodology and findings of research should be transparent. This includes studies using artificial intelligence to develop predictive algorithms that make individualized diagnostic or prognostic risk predictions. We argue that it is paramount to make the algorithm behind any prediction publicly available. This allows independent external validation, assessment of performance heterogeneity across settings and over time, and algorithm refinement or updating. Online calculators and apps may aid uptake if accompanied by sufficient information. For algorithms based on "black box" machine learning methods, software for algorithm implementation is a must. Hiding algorithms for commercial exploitation is unethical, because it then becomes impossible to assess whether algorithms work as advertised or to monitor when and how they are updated. Journals and funders should demand maximal transparency for publications on predictive algorithms, and clinical guidelines should recommend only publicly available algorithms.