Dubey Akshat, Yang Zewen, Hattab Georges
Center for Artificial Intelligence in Public Health Research (ZKI-PH) at Robert Koch Institute, Nordufer 20, 13353 Berlin, Germany.
Department of Mathematics and Computer Science, Freie Universität Berlin, Arnimallee 14, 14195 Berlin, Germany.
iScience. 2024 Jul 30;27(9):110603. doi: 10.1016/j.isci.2024.110603. eCollection 2024 Sep 20.
The rapidly growing field of AI faces challenges of trust, transparency, fairness, and discrimination. Despite the need for new regulations, a mismatch between regulatory science and AI has prevented a consistent framework from emerging. A five-layer nested model for AI design and validation aims to address these issues, streamlining the design and validation of AI applications and improving fairness, trust, and AI adoption. The model aligns with existing regulations, addresses the daily challenges faced by AI practitioners, and offers prescriptive guidance for selecting appropriate evaluation approaches by identifying the validity threats unique to each layer. Motivated by this model, we make three recommendations: (1) authors should distinguish between layers when claiming contributions, to clarify the specific areas in which a contribution is made and to avoid confusion; (2) authors should explicitly state upstream assumptions, so that the context and limitations of their AI system are clearly understood; and (3) AI venues should promote thorough testing and validation of AI systems and their compliance with regulatory requirements.