Zhu Qingqing, Chen Xiuying, Jin Qiao, Hou Benjamin, Mathai Tejas Sudharshan, Mukherjee Pritam, Gao Xin, Summers Ronald M, Lu Zhiyong
National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA.
Bioscience Research Center, King Abdullah University of Science & Technology, Saudi Arabia.
Proc (IEEE Int Conf Healthc Inform). 2024 Jun;2024:402-411. doi: 10.1109/ichi61247.2024.00058. Epub 2024 Aug 22.
In radiology, Artificial Intelligence (AI) has significantly advanced report generation, but automatic evaluation of these AI-produced reports remains challenging. Current metrics, such as conventional Natural Language Generation (NLG) and Clinical Efficacy (CE) metrics, often fall short in capturing the semantic intricacies of clinical contexts or overemphasize clinical details, undermining report clarity. To overcome these issues, our proposed method synergizes the expertise of professional radiologists with Large Language Models (LLMs), such as GPT-3.5 and GPT-4. Utilizing In-Context Instruction Learning (ICIL) and Chain of Thought (CoT) reasoning, our approach aligns LLM evaluations with radiologist standards, enabling detailed comparisons between human-written and AI-generated reports. This is further enhanced by a regression model that aggregates sentence-level evaluation scores. Experimental results show that our "Detailed GPT-4 (5-shot)" model achieves a correlation of 0.48, outperforming the METEOR metric by 0.19, while our "Regressed GPT-4" model shows even greater alignment (0.64) with expert evaluations, exceeding the best existing metric by a margin of 0.35. Moreover, the robustness of our explanations has been validated through a thorough iterative strategy. We plan to publicly release annotations from radiology experts, setting a new standard for accuracy in future assessments. This underscores the potential of our approach in enhancing the quality assessment of AI-driven medical reports.
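The sketch below is a rough, hypothetical reconstruction of the workflow the abstract describes, not the authors' released code: each candidate sentence is scored by GPT-4 using radiologist-written few-shot examples (ICIL) with step-by-step reasoning (CoT), and a regression model then aggregates the sentence scores and is compared against expert ratings. The prompt wording, the 1-5 scale, the mean/min/max feature set, the use of Kendall's tau, and helper names such as score_sentence and report_features are all illustrative assumptions.

```python
# Hypothetical sketch of the pipeline described in the abstract; prompt wording,
# scoring scale, features, and helper names are assumptions for illustration.
import re
import numpy as np
from openai import OpenAI                      # official openai>=1.0 client
from sklearn.linear_model import LinearRegression
from scipy.stats import kendalltau

client = OpenAI()                              # expects OPENAI_API_KEY in the environment

# Radiologist-written worked examples (5 of them in a "Detailed GPT-4 (5-shot)" setup).
FEW_SHOT_EXAMPLES = """\
Reference: "No focal consolidation." Candidate: "The lungs are clear."
Reasoning: Both assert the absence of consolidation; clinically equivalent.
Score: 5
"""  # ...remaining examples omitted for brevity

def score_sentence(reference: str, candidate: str) -> float:
    """Score one candidate sentence against a reference sentence with GPT-4,
    eliciting chain-of-thought reasoning before the final numeric score."""
    prompt = (
        "You are a radiologist grading report sentences on a 1-5 scale.\n"
        f"{FEW_SHOT_EXAMPLES}\n"
        f'Reference: "{reference}" Candidate: "{candidate}"\n'
        "Reasoning:"
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    text = resp.choices[0].message.content
    match = re.search(r"Score:\s*([0-9.]+)", text)
    return float(match.group(1)) if match else 0.0

def report_features(ref_sents, cand_sents):
    """Collapse per-sentence scores into a fixed-length feature vector
    (mean/min/max here; the paper's exact features may differ)."""
    scores = [max(score_sentence(r, c) for r in ref_sents) for c in cand_sents]
    return [float(np.mean(scores)), float(np.min(scores)), float(np.max(scores))]

def evaluate(train_pairs, train_expert, test_pairs, test_expert):
    """'Regressed GPT-4' step: fit the aggregator on reports with expert ratings,
    then report rank correlation of its predictions with held-out expert scores."""
    X_train = [report_features(r, c) for r, c in train_pairs]
    X_test = [report_features(r, c) for r, c in test_pairs]
    model = LinearRegression().fit(X_train, train_expert)
    tau, _ = kendalltau(model.predict(X_test), test_expert)
    return tau
```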