Högberg Charlotte, Larsson Stefan, Lång Kristina
Department of Technology and Society, Faculty of Engineering, Lund University, Lund, Sweden.
Department of Translational Medicine, Diagnostic Radiology, Lund University, Lund, Sweden.
Digit Health. 2024 Oct 7;10:20552076241287958. doi: 10.1177/20552076241287958. eCollection 2024 Jan-Dec.
Lack of trust and transparency is stressed as a challenge for the clinical implementation of artificial intelligence (AI). In breast cancer screening, AI-supported reading shows promising results, but more research is needed on how the medical experts who are facing the integration of AI into their work reason about trust and information needs. From a sociotechnical information practice perspective, we add to this knowledge through a Swedish case study. This study aims to: (1) clarify Swedish breast radiologists' views on trust, information and expertise pertaining to AI in mammography screening and (2) analytically address ideas about medical professionals' critical engagement with AI and motivations for trust in AI.
An online survey was distributed to Swedish breast radiologists. Survey responses were analysed using descriptive statistics, correlation analysis and qualitative content analysis. The results served as a foundation for analysing trust and information as parts of critical engagement with AI.
Of the Swedish breast radiologists (n = 105), 47 answered the survey (response rate 44.8%). Of the respondents, 53.2% (n = 25) would to a high or somewhat high degree trust AI assessments. To a great extent, additional information would support the respondents' trust evaluations. It remains unclear what type of critical engagement medical professionals are expected to perform on AI as decision support.
There is a demand for enhanced information, explainability and transparency of AI-supported mammography. Further discussion and agreement are needed regarding what the desired goals for trust in AI should be and how they relate to medical professionals' critical evaluation of AI-made claims in medical decision support.