

Navigating artificial intelligence in care homes: Competing stakeholder views of trust and logics of care.

Affiliations

Sydney Centre for Healthy Societies, Sociology, School of Social and Political Sciences, The University of Sydney, Social Sciences Building (A02), NSW, 2006, Australia.

Sociology, School of Social Sciences, Faculty of Arts, Monash University, Menzies Building, Clayton, VIC, 3800, Australia.

Publication information

Soc Sci Med. 2024 Oct;358:117187. doi: 10.1016/j.socscimed.2024.117187. Epub 2024 Aug 5.

Abstract

The COVID-19 pandemic shed light on systemic issues plaguing care (nursing) homes, from staff shortages to substandard healthcare. Artificial Intelligence (AI) technologies, including robots and chatbots, have been proposed as solutions to such issues. Yet, socio-ethical concerns about the implications of AI for health and care practices have also been growing among researchers and practitioners. At a time of AI promise and concern, it is critical to understand how those who develop and implement these technologies perceive their use and impact in care homes. Combining a sociological approach to trust with Annemarie Mol's logic of care and Jeannette Pols' concept of fitting, we draw on 18 semi-structured interviews with care staff, advocates, and AI developers to explore notions of human-AI care. Our findings show positive perceptions and experiences of AI in care homes, but also ambivalence. While integrative care incorporating humans and technology was salient across interviewees, we also identified experiential, contextual, and knowledge divides between AI developers and care staff. For example, developers lacked experiential knowledge of care homes' daily functioning and constraints, influencing how they designed AI. Care staff demonstrated limited experiential knowledge of AI or more critical views about contexts of use, affecting their trust in these technologies. Different understandings of 'good care' were evident, too: 'warm' care was sometimes linked to human care and 'cold' care to technology. In conclusion, understandings and experiences of AI are marked by different logics of sociotechnical care and related levels of trust in these sensitive settings.

