Li Mingjie, Liu Rui, Wang Fuyu, Chang Xiaojun, Liang Xiaodan
University of Technology Sydney, Sydney, Australia.
Monash University, Melbourne, Australia.
World Wide Web. 2023;26(1):253-270. doi: 10.1007/s11280-022-01013-6. Epub 2022 Aug 27.
Medical reports have significant clinical value to radiologists and specialists, especially during a pandemic such as COVID-19. However, beyond the common difficulties faced in natural image captioning, medical report generation specifically requires the model to describe a medical image with a fine-grained, semantically coherent paragraph that satisfies both medical commonsense and logic. Previous works generally extract global image features and attempt to generate a paragraph similar to the referenced reports; however, this approach has two limitations. First, the regions of primary interest to radiologists are usually located in a small area of the global image, meaning that the remaining parts of the image can be regarded as irrelevant noise during training. Second, each medical report contains many similar sentences describing the normal regions of the image, which causes serious data bias. This bias is likely to teach models to generate these inessential sentences on a regular basis. To address these problems, we propose an Auxiliary Signal-Guided Knowledge Encoder-Decoder (ASGK) that mimics radiologists' working patterns. Specifically, auxiliary patches are explored to expand the widely used visual patch features before they are fed to the Transformer encoder, while external linguistic signals help the decoder better master prior knowledge during pre-training. Our approach performs well on common benchmarks, including the CX-CHR, IU X-Ray, and COVID-19 CT Report (COV-CTR) datasets, demonstrating that combining auxiliary signals with a Transformer architecture can bring a significant improvement to medical report generation. The experimental results confirm that auxiliary-signal-driven Transformer-based models are capable of outperforming previous approaches on both medical terminology classification and paragraph generation metrics.
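The abstract describes expanding the visual patch features with auxiliary patch features before the Transformer encoder. The following is a minimal sketch of that input-expansion step only; the feature extractors, dimensions, and the `build_encoder_input` helper are illustrative assumptions, not details given in the paper.

```python
import numpy as np

def build_encoder_input(global_patches, auxiliary_patches):
    """Concatenate global and auxiliary patch features along the
    sequence axis so the encoder can attend over both.

    global_patches:    (n_global, d) features from the whole image
    auxiliary_patches: (n_aux, d) features from salient sub-regions
    (hypothetical shapes; the paper does not specify them)
    """
    return np.concatenate([global_patches, auxiliary_patches], axis=0)

rng = np.random.default_rng(0)
d = 512                                        # assumed feature dimension
global_feats = rng.standard_normal((49, d))    # e.g. a 7x7 patch grid
aux_feats = rng.standard_normal((8, d))        # assumed auxiliary crops

tokens = build_encoder_input(global_feats, aux_feats)
print(tokens.shape)  # (57, 512): expanded token sequence for the encoder
```

The expanded sequence simply gives the encoder extra tokens for the small regions radiologists actually focus on, rather than forcing it to find them inside the global features alone.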