Palmarin Elena, Lando Stefania, Marchet Alberto, Saibene Tania, Michieletto Silvia, Cagol Matteo, Milardi Francesco, Gregori Dario, Lorenzoni Giulia
Unit of Biostatistics, Epidemiology and Public Health, Department of Cardiac, Thoracic, Vascular Sciences and Public Health, University of Padova, 35131 Padova, Italy.
Breast Surgery Unit, Veneto Institute of Oncology IOV, IRCCS, 35128 Padova, Italy.
J Clin Med. 2025 Aug 1;14(15):5411. doi: 10.3390/jcm14155411.
Accurate and accessible perioperative health information empowers patients and enhances recovery outcomes. Artificial intelligence tools, such as ChatGPT, have garnered attention for their potential in health communication. This study evaluates the accuracy and readability of responses generated by ChatGPT to questions commonly asked about breast cancer surgery. Fifteen simulated patient queries about breast cancer surgery preparation and recovery were prepared. Responses generated by ChatGPT (GPT-4o) were evaluated for accuracy by a pool of breast surgeons using a 4-point Likert scale. Readability was assessed with the Flesch-Kincaid Grade Level (FKGL). Descriptive statistics were used to summarize the findings. Of the 15 responses evaluated, 11 were rated as "accurate and comprehensive" and 4 as "correct but incomplete"; none were classified as "partially incorrect" or "completely incorrect". The median FKGL score was 11.2, indicating a high school reading level. While most responses were technically accurate, the complexity of language exceeded the recommended readability levels for patient-directed materials. The model shows potential as a complementary resource for patient education in breast cancer surgery, but should not replace direct interaction with healthcare providers. Future research should focus on enhancing language models' ability to generate accessible and patient-friendly content.
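For readers unfamiliar with the readability metric, a minimal sketch of the Flesch-Kincaid Grade Level computation is shown below. The formula itself (0.39 × words per sentence + 11.8 × syllables per word − 15.59) is standard; the vowel-group syllable counter and the sample sentence are illustrative assumptions, not part of the study's methodology, which did not specify its FKGL implementation.

```python
# Sketch of the Flesch-Kincaid Grade Level (FKGL) readability score.
# The syllable counter is a rough vowel-group heuristic (assumption),
# not the dictionary-based method a production readability tool would use.
import re

def count_syllables(word: str) -> int:
    """Approximate English syllable count via vowel groups."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    # Common adjustment: a trailing silent 'e' usually adds no syllable.
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def fkgl(text: str) -> float:
    """FKGL = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / max(len(sentences), 1))
            + 11.8 * (syllables / max(len(words), 1)) - 15.59)

# Hypothetical example: a score near 11 corresponds to an 11th-grade
# (high school) reading level, as reported for the median response.
sample = ("Before your operation, the surgical team will review your "
          "medications and explain how to prepare for anesthesia and "
          "postoperative recovery.")
print(round(fkgl(sample), 1))
```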