
Artificial intelligence enhanced Chatbot boom: A single center observational study to evaluate assistance in clinical anesthesiology.

Author Information

Jois Sowmya M, Rangalakshmi Srinivasan, Iyengar Sowmya Madihalli Janardhan, Mahesh Chethana, Devi Lairenjam Deepa, Namachivayam Arun Kumar

Affiliations

Department of Anaesthesiology, RajaRajeswari Medical College, Bangalore, Affiliated to MGR University, Bengaluru, Karnataka, India.

Department of Biostatistics, Bapuji Dental College and Hospital, Davangere, Karnataka, India.

Publication Information

J Anaesthesiol Clin Pharmacol. 2025 Apr-Jun;41(2):351-356. doi: 10.4103/joacp.joacp_151_24. Epub 2025 Mar 24.

Abstract

BACKGROUND AND AIMS

The field of anaesthesiology and perioperative medicine has explored advancements in science and technology to ensure precise and personalized anesthesia plans. The surge in the use of the chat generative pretrained transformer (Chat GPT) in medicine has evoked interest among anesthesiologists in assessing its performance in the operating room. However, there are concerns about accuracy, patient privacy, and ethics. Our objective in this study was to assess whether Chat GPT can provide assistance in clinical decisions and to compare its responses with those of resident anesthesiologists.

MATERIAL AND METHODS

In this cross-sectional study conducted at a teaching hospital, a set of 30 hypothetical clinical scenarios in the operating room was presented to resident anesthesiologists and to Chat GPT-4. The first five of the 30 scenarios were entered with three additional prompts in the same chat to determine whether the answers became more detailed. The responses were labeled and assessed by three reviewers not involved in the study.

RESULTS

The intraclass correlation coefficient (ICC) values show variation in the level of agreement between Chat GPT and the anesthesiologists. For instance, the ICC of 0.41 between A1 and Chat GPT indicates a moderate level of agreement, whereas the ICC of 0.06 between A2 and Chat GPT suggests a comparatively weaker level of agreement.
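The abstract reports agreement as ICC values but does not state which ICC form was used. As an illustration only, here is a minimal sketch of one common variant, ICC(2,1) (two-way random effects, absolute agreement, single rater), computed from a subjects-by-raters score matrix; the function name and example scores are hypothetical, not taken from the study.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_subjects, n_raters) array-like of scores, e.g. one row per
    clinical scenario and one column per rater (resident or chatbot).
    """
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)   # per-subject means
    col_means = Y.mean(axis=0)   # per-rater means

    # Sums of squares for the two-way layout
    ss_total = ((Y - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between raters
    ss_err = ss_total - ss_rows - ss_cols            # residual

    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))

    # Shrout-Fleiss ICC(2,1) formula
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

With perfectly matching columns the statistic is 1, and it decreases toward (and below) 0 as the raters diverge, which is the sense in which 0.41 reads as moderate agreement and 0.06 as weak agreement.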

CONCLUSIONS

This study found variations in the level of agreement between Chat GPT's and resident anesthesiologists' responses in terms of accuracy and comprehensiveness when solving intraoperative scenarios. The use of additional prompts improved the agreement of Chat GPT with the anesthesiologists.

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3e60/12002681/959726d44133/JOACP-41-351-g001.jpg
