Holzinger Andreas, Zatloukal Kurt, Müller Heimo
Human-Centered AI Lab, Institute of Forest Engineering, Department for Ecosystem Management, Climate and Biodiversity, University of Natural Resources and Life Sciences Vienna, Austria.
Information Science and Machine Learning Group, Diagnostic and Research Institute of Pathology, Medical University Graz, Austria.
N Biotechnol. 2025 Mar 25;85:59-62. doi: 10.1016/j.nbt.2024.12.003. Epub 2024 Dec 13.
The rapid proliferation of artificial intelligence (AI) systems across diverse domains raises critical questions about the feasibility of meaningful human oversight, particularly in high-stakes domains such as new biotechnology. As AI systems grow increasingly complex, opaque, and autonomous, ensuring their responsible use becomes a formidable challenge. During our editorial work for the special issue "Artificial Intelligence for Life Sciences", we placed increasing emphasis on the topic of "human oversight". Consequently, this editorial briefly discusses the evolving role of human oversight in AI governance, focusing on the practical, technical, and ethical dimensions of maintaining control. It examines how the complexity of contemporary AI architectures, such as large-scale neural networks and generative AI applications, undermines human understanding and decision-making capabilities. Furthermore, it evaluates emerging approaches that aim to enable oversight, such as explainable AI (XAI), human-in-the-loop systems, and regulatory frameworks, while acknowledging their limitations. The picture that emerged from this comprehensive analysis is that, while complete oversight may no longer be viable in certain contexts, strategic interventions leveraging human-AI collaboration and trustworthy AI design principles can preserve accountability and safety. The discussion highlights the urgent need for interdisciplinary efforts to rethink oversight mechanisms in an era in which AI may outpace human comprehension.
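The abstract names human-in-the-loop systems as one oversight mechanism. As a purely illustrative aside, the minimal sketch below shows one common form such a mechanism can take: a routing gate that escalates model outputs to a human reviewer based on stakes and model confidence. All names (route_prediction, CONFIDENCE_FLOOR, the Prediction type) and the threshold value are hypothetical assumptions for illustration, not taken from the editorial.

```python
# Illustrative human-in-the-loop oversight gate (hypothetical, not from the editorial).
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.90  # assumed threshold below which a human must review


@dataclass
class Prediction:
    label: str
    confidence: float  # model's self-reported probability, in [0, 1]


def route_prediction(pred: Prediction, high_stakes: bool) -> str:
    """Route a model output either to automatic use or to human review.

    High-stakes cases (e.g. a diagnostic decision) are always escalated,
    regardless of confidence; low-confidence cases are escalated as well.
    """
    if high_stakes or pred.confidence < CONFIDENCE_FLOOR:
        return "escalate_to_human"
    return "accept_automatically"


if __name__ == "__main__":
    # A confident, low-stakes prediction passes through automatically...
    print(route_prediction(Prediction("benign", 0.97), high_stakes=False))
    # ...while any high-stakes prediction is held for human oversight.
    print(route_prediction(Prediction("malignant", 0.97), high_stakes=True))
```

A real deployment would replace the fixed threshold with calibrated uncertainty estimates and domain-specific escalation rules; the point of the sketch is only that the human remains the decision point for consequential or uncertain outputs.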