
Artificial intelligence-generated responses to frequently asked questions on coccydynia: Evaluating the accuracy and consistency of GPT-4o's performance.

Author information

Keles Aslinur, Illeez Ozge Gulsum, Erbagci Berkay, Giray Esra

Affiliation

Department of Physical Medicine and Rehabilitation, Health Science University, Fatih Sultan Mehmet Training and Research Hospital, İstanbul, Türkiye.

Publication information

Arch Rheumatol. 2025 Mar 17;40(1):63-71. doi: 10.46497/ArchRheumatol.2025.10966. eCollection 2025 Mar.

Abstract

OBJECTIVES

This study aimed to assess whether GPT-4o's responses to patient-centered frequently asked questions about coccydynia are accurate and consistent when asked at different times and from different accounts.

MATERIALS AND METHODS

Questions were collected from medical websites, forums, and patient support groups and posed to GPT-4o. The responses were evaluated by two physiatrists for accuracy and consistency. Responses were categorized as: correct and comprehensive, correct but incomplete, partially correct and partially incorrect, and completely incorrect. Inconsistencies in scoring were resolved by an additional reviewer as needed. Statistical analysis, including Cohen's kappa for interreviewer reliability, was performed.
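For readers unfamiliar with the statistic, Cohen's kappa corrects observed rater agreement for agreement expected by chance: κ = (p_o − p_e) / (1 − p_e). The sketch below is a minimal, illustrative implementation; the rating vectors are hypothetical and are not the study's data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the agreement expected by chance from each
    rater's marginal label frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of items on which the raters agree
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the product of marginal frequencies
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b.get(label, 0) for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings on a 4-level scale (0-3), one score per response
print(cohens_kappa([3, 2, 2, 1, 0, 3], [3, 2, 1, 1, 0, 2]))
```

By the common Landis–Koch benchmarks, values between 0.61 and 0.80 (such as the κ = 0.67 reported below) are conventionally interpreted as substantial agreement.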

RESULTS

Of the 81 responses, 45.7% were rated as correct and comprehensive, while 49.4% were correct but incomplete. Only 4.9% of the responses contained partially incorrect information, and no responses were completely incorrect. The interreviewer agreement was substantial (kappa=0.67), but 75% of the responses differed between the two rounds. Notably, 34.9% of initially incomplete answers improved in the second round.

CONCLUSION

GPT-4o shows promise in providing accurate and generally reliable information about coccydynia. However, the variability observed in response consistency across repeated queries suggests that while the model is useful for patient education and general inquiries, it may not be suitable for providing specialized clinical knowledge without human oversight.


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d36c/12010271/008302d2ef69/AR-2025-40-1-063-071-F1.jpg
