Artificial intelligence in clinical practice: a cross-sectional survey of paediatric surgery residents' perspectives.

Author information

Gigola Francesca, Amato Tommaso, Del Riccio Marco, Raffaele Alessandro, Morabito Antonino, Coletta Riccardo

Affiliations

School of Pediatric Surgery, University of Florence, Florence, Italy.

Publication information

BMJ Health Care Inform. 2025 May 21;32(1):e101456. doi: 10.1136/bmjhci-2025-101456.

Abstract

OBJECTIVES

The aim of this study was to compare the performances of residents and ChatGPT in answering validated questions and assess paediatric surgery residents' acceptance, perceptions and readiness to integrate artificial intelligence (AI) into clinical practice.

METHODS

We conducted a cross-sectional study using randomly selected questions and clinical cases on paediatric surgery topics. We examined residents' acceptance of AI before and after comparing their results to ChatGPT's results using the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) model. Data analysis was performed using Jamovi V.2.4.12.0.

RESULTS

30 residents participated. ChatGPT-4.0's median score was 13.75, while ChatGPT-3.5's was 8.75. The median score among residents was 8.13. Differences appeared statistically significant. ChatGPT outperformed residents specifically in definition questions (ChatGPT-4.0 vs residents, p<0.0001; ChatGPT-3.5 vs residents, p=0.03). In the UTAUT2 Questionnaire, respondents expressed a more positive evaluation of ChatGPT with higher mean values for each construct and lower fear of technology after learning about test scores.

DISCUSSION

ChatGPT performed better than residents in knowledge-based questions and simple clinical cases. The accuracy of ChatGPT declined when confronted with more complex questions. The UTAUT questionnaire results showed that learning about the potential of ChatGPT could lead to a shift in perception, resulting in a more positive attitude towards AI.

CONCLUSION

Our study reveals residents' positive receptivity towards AI, especially after being confronted with its efficacy. These results highlight the importance of integrating AI-related topics into medical curricula and residency to help future physicians and surgeons better understand the advantages and limitations of AI.

