Reinke Annika, Tizabi Minu D, Baumgartner Michael, Eisenmann Matthias, Heckmann-Nötzel Doreen, Kavur A Emre, Rädsch Tim, Sudre Carole H, Acion Laura, Antonelli Michela, Arbel Tal, Bakas Spyridon, Benis Arriel, Blaschko Matthew, Buettner Florian, Cardoso M Jorge, Cheplygina Veronika, Chen Jianxu, Christodoulou Evangelia, Cimini Beth A, Collins Gary S, Farahani Keyvan, Ferrer Luciana, Galdran Adrian, van Ginneken Bram, Glocker Ben, Godau Patrick, Haase Robert, Hashimoto Daniel A, Hoffman Michael M, Huisman Merel, Isensee Fabian, Jannin Pierre, Kahn Charles E, Kainmueller Dagmar, Kainz Bernhard, Karargyris Alexandros, Karthikesalingam Alan, Kenngott Hannes, Kleesiek Jens, Kofler Florian, Kooi Thijs, Kopp-Schneider Annette, Kozubek Michal, Kreshuk Anna, Kurc Tahsin, Landman Bennett A, Litjens Geert, Madani Amin, Maier-Hein Klaus, Martel Anne L, Mattson Peter, Meijering Erik, Menze Bjoern, Moons Karel G M, Müller Henning, Nichyporuk Brennan, Nickel Felix, Petersen Jens, Rafelski Susanne M, Rajpoot Nasir, Reyes Mauricio, Riegler Michael A, Rieke Nicola, Saez-Rodriguez Julio, Sánchez Clara I, Shetty Shravya, van Smeden Maarten, Summers Ronald M, Taha Abdel A, Tiulpin Aleksei, Tsaftaris Sotirios A, Van Calster Ben, Varoquaux Gaël, Wiesenfarth Manuel, Yaniv Ziv R, Jäger Paul F, Maier-Hein Lena
ArXiv. 2024 Feb 23:arXiv:2302.01790v4.
Validation metrics are key for the reliable tracking of scientific progress and for bridging the current chasm between artificial intelligence (AI) research and its translation into practice. However, increasing evidence shows that, particularly in image analysis, metrics are often chosen inadequately in relation to the underlying research problem. This could be attributed to a lack of accessibility of metric-related knowledge: while taking into account the individual strengths, weaknesses, and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multi-stage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides the first reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Focusing on biomedical image analysis but with the potential for transfer to other fields, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. To facilitate comprehension, illustrations and specific examples accompany each pitfall. As a structured body of information accessible to researchers of all levels of expertise, this work enhances global comprehension of a key topic in image analysis validation.
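For readers unfamiliar with the kind of pitfall the abstract alludes to, the following is a minimal sketch, not taken from the paper, of one well-known example: pixel-wise accuracy can appear excellent on a segmentation task with a small target structure even when the prediction misses the structure entirely, whereas an overlap-based metric such as the Dice similarity coefficient exposes the failure. The image size, object size, and variable names below are illustrative assumptions.

```python
# Illustrative sketch (not from the paper): accuracy vs. Dice on a tiny target structure.
import numpy as np

# Hypothetical 100x100 binary reference mask with a small 5x5 foreground object.
reference = np.zeros((100, 100), dtype=bool)
reference[40:45, 40:45] = True

# A degenerate "prediction" that labels every pixel as background.
prediction = np.zeros_like(reference)

# Pixel-wise accuracy: fraction of pixels whose labels agree.
accuracy = np.mean(prediction == reference)

# Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|); 0 when there is no overlap.
intersection = np.logical_and(prediction, reference).sum()
denominator = prediction.sum() + reference.sum()
dice = 2.0 * intersection / denominator if denominator > 0 else 1.0

print(f"Accuracy: {accuracy:.3f}")  # ~0.998 -- looks nearly perfect
print(f"Dice:     {dice:.3f}")      # 0.000 -- the object was never found
```

The point is not that Dice is always the right choice; rather, as the paper argues, the suitability of a metric depends on properties of the underlying research problem, such as class imbalance and structure size.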