[Participatory approaches in the development of AI applications in medicine: opportunities and challenges].

Author Information

Heizmann Carolin, Gleim Patricia, Kellmeyer Philipp

Affiliations

Data and Web Science Group, Fakultät für Wirtschaftsinformatik und Wirtschaftsmathematik, Universität Mannheim, B6, 26, 68159, Mannheim, Deutschland.

Human-Technology Interaction Lab, Klinik für Neurochirurgie, Universitätsklinikum Freiburg, Freiburg im Breisgau, Deutschland.

Publication Information

Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz. 2025 Jun 30. doi: 10.1007/s00103-025-04095-5.

Abstract

The increasing integration of artificial intelligence (AI) in healthcare not only holds the potential for efficiency gains, personalized medicine, and evidence-based decisions but also raises ethical and social challenges, such as bias, lack of transparency, and limited acceptance. Participatory approaches that actively involve patients, physicians, caregivers, and other stakeholders in the development process make it possible to align technological innovations with actual needs and to design them in a socially just way.

The analysis distinguishes between participation as active co-design and partaking as access to social resources. Theoretical models such as the "ladder of participation" (Arnstein) illustrate the different levels of participation. In addition, methodological approaches that promote early ethical reflection and continuous user feedback are discussed, such as action research, community-based participatory research, ethics by design, and value-sensitive design.

Practical examples such as KIPA (AI-supported patient information), KIDELIR (delirium prevention in care), and PRIVETDIS (neurotechnologies and mental privacy) show how participatory research can contribute to the optimization of care concepts. In addition to opportunities such as increased acceptance and user-centered technology design, challenges are identified, including limited resources, lack of representativeness, and invisible additional burdens for those involved. Finally, it is emphasized that, in addition to technical and regulatory measures, continuous ethical reflection and transparent communication are essential to implement trustworthy and effective AI systems in healthcare.
