Assessing the Efficacy of ChatGPT Prompting Strategies in Enhancing Thyroid Cancer Patient Education: A Prospective Study.

Author Information

Xu Qi, Wang Jing, Chen Xiaohui, Wang Jiale, Li Hanzhi, Wang Zheng, Li Weihan, Gao Jinliang, Chen Chen, Gao Yuwan

Affiliations

Department of Breast and Thyroid Surgery, The First Affiliated Hospital of Nanyang Medical College, Nanyang, China.

Department of Breast Surgery, Harbin Medical University Cancer Hospital, Harbin, China.

Publication Information

J Med Syst. 2025 Jan 17;49(1):11. doi: 10.1007/s10916-024-02129-0.

Abstract

With the rise of AI platforms, patients increasingly use them for information, relying on advanced language models like ChatGPT for answers and advice. However, the effectiveness of ChatGPT in educating thyroid cancer patients remains unclear. We designed 50 questions covering key areas of thyroid cancer management and generated corresponding responses under four different prompt strategies. These answers were evaluated on four dimensions: accuracy, comprehensiveness, human care, and satisfaction. Additionally, the readability of the responses was assessed using the Flesch-Kincaid grade level, the Gunning Fog Index, the Simple Measure of Gobbledygook, and the Fry readability score. We also statistically analyzed the references in the responses generated by ChatGPT. The type of prompt significantly influenced the quality of ChatGPT's responses: the "statistics and references" prompt yielded the highest-quality outcomes. Prompts tailored to a "6th-grade level" generated the most easily understandable text, whereas responses without specific prompts were the most complex. The "statistics and references" prompt also produced the longest responses, while the "6th-grade level" prompt resulted in the shortest. Notably, 87.84% of citations referenced published medical literature, but 12.82% contained misinformation or errors. ChatGPT demonstrates considerable potential for enhancing the readability and quality of thyroid cancer patient education materials. By adjusting prompt strategies, ChatGPT can generate responses that cater to diverse patient needs, improving their understanding and management of the disease. However, AI-generated content must be carefully supervised to ensure that the information it provides is accurate.
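The readability indices named above are simple formulas over word, sentence, and syllable counts. As a minimal sketch (the counts passed in below are illustrative, not from the study, and real tools differ in how they tokenize text and count syllables), three of the four indices can be computed directly:

```python
import math

def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid grade level:
    0.39*(words/sentences) + 11.8*(syllables/words) - 15.59"""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def gunning_fog(words, sentences, complex_words):
    """Gunning Fog Index, where complex_words are words of 3+ syllables:
    0.4 * ((words/sentences) + 100*(complex_words/words))"""
    return 0.4 * ((words / sentences) + 100 * (complex_words / words))

def smog(sentences, polysyllables):
    """Simple Measure of Gobbledygook (SMOG):
    1.0430*sqrt(polysyllables * 30/sentences) + 3.1291"""
    return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291

# Hypothetical counts for a short patient-education passage
print(flesch_kincaid_grade(words=120, sentences=8, syllables=210))  # 10.91
print(gunning_fog(words=120, sentences=8, complex_words=18))        # 12.0
print(smog(sentences=8, polysyllables=18))
```

All three scores approximate the U.S. school grade needed to understand the text, which is why a "6th-grade level" prompt can be checked against a target score of about 6. (The Fry readability score is graph-based rather than a closed-form formula, so it is omitted here.)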
