Yang Xiongwen, Xiao Yi, Liu Di, Zhang Yun, Deng Huiyin, Huang Jian, Shi Huiyou, Liu Dan, Liang Maoli, Jin Xing, Sun Yongpan, Yao Jing, Zhou XiaoJiang, Guo Wankai, He Yang, Tang WeiJuan, Xu Chuan
Department of Thoracic Surgery, Guizhou Provincial People's Hospital, No. 83, Zhongshan East Road, Guiyang, Guizhou, 550000, China.
NHC Key Laboratory of Pulmonary Immunological Diseases, Guizhou Provincial People's Hospital, Guiyang, Guizhou, 550000, China.
BMC Med Inform Decis Mak. 2025 Jan 23;25(1):36. doi: 10.1186/s12911-024-02838-z.
Large language models (LLMs) are increasingly utilized in healthcare settings. Postoperative pathology reports, which are essential for diagnosing and determining treatment strategies for surgical patients, frequently include complex data that can be challenging for patients to comprehend. This complexity can adversely affect the quality of communication between doctors and patients about their diagnosis and treatment options, potentially impacting patient outcomes such as understanding of their condition, treatment adherence, and overall satisfaction.
This study analyzed text pathology reports from four hospitals between October and December 2023, focusing on malignant tumors. Using GPT-4, we developed templates for interpretive pathology reports (IPRs) to simplify medical terminology for non-professionals. We randomly selected 70 reports to generate these templates and evaluated the remaining 628 reports for consistency and readability. Patient understanding was measured using a custom-designed pathology report understanding level assessment scale, scored by volunteers with no medical background. The study also recorded doctor-patient communication time and patient comprehension levels before and after using IPRs.
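The template-driven simplification described above can be illustrated with a minimal sketch. The study used GPT-4 with IPR templates; the rule-based glossary below is purely hypothetical and only shows the underlying idea of annotating jargon with plain-language explanations:

```python
import re

# Hypothetical mini-glossary mapping pathology jargon to plain language.
# The actual study generated IPRs with GPT-4 and curated templates; this
# dictionary is illustrative only.
GLOSSARY = {
    "adenocarcinoma": "a cancer that starts in gland-forming cells",
    "margins negative": "no tumor cells at the cut edge of the removed tissue",
    "lymphovascular invasion": "tumor cells seen inside small blood or lymph vessels",
}

def simplify(report: str) -> str:
    """Append a plain-language explanation after each glossary term (case-insensitive)."""
    out = report
    for term, plain in GLOSSARY.items():
        out = re.sub(re.escape(term), f"{term} ({plain})", out, flags=re.IGNORECASE)
    return out

print(simplify("Lung adenocarcinoma, margins negative."))
```

A real pipeline would instead send the original report plus an IPR template as a prompt to the model, but the input/output contract (technical report in, annotated readable text out) is the same.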
Among the 698 pathology reports analyzed, interpretation through LLMs significantly improved readability and patient understanding. With the use of IPRs, the average communication time between doctors and patients decreased by over 70%, from 35 to 10 min (P < 0.001). Patients also scored higher on understanding when provided with AI-generated reports, improving from 5.23 to 7.98 points (P < 0.001), indicating an effective translation of complex medical information. Consistency between original pathology reports (OPRs) and IPRs was also evaluated, with results showing high levels of consistency across all assessed dimensions, achieving an average score of 4.95 out of 5.
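The before/after comparisons reported here (communication time, understanding score) are paired measurements, for which a paired t-test is a standard analysis. The sketch below, using synthetic data invented for illustration (not the study's data), shows how such a statistic is computed:

```python
import statistics

def paired_t(before, after):
    """Paired t statistic: mean of per-subject differences over its standard error."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample standard deviation of differences
    return mean_d / (sd_d / n ** 0.5)

# Synthetic per-patient communication times in minutes (illustration only).
before = [36, 34, 35, 37, 33, 35, 36, 34]
after = [10, 11, 9, 10, 12, 9, 10, 11]

t = paired_t(before, after)
print(f"mean reduction: {statistics.mean(before) - statistics.mean(after):.2f} min, t = {t:.2f}")
```

With 8 subjects the statistic has 7 degrees of freedom; a large positive t corresponds to the small P values (P < 0.001) reported in the study.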
This research demonstrates the efficacy of LLMs like GPT-4 in enhancing doctor-patient communication by translating pathology reports into more accessible language. While this study did not directly measure patient outcomes or satisfaction, it provides evidence that improved understanding and reduced communication time may positively influence patient engagement. These findings highlight the potential of AI to bridge gaps between medical professionals and the public in healthcare environments.