Liu Xiaofeng, Xing Fangxu, Yang Chao, Jay Kuo C-C, El Fakhri Georges, Woo Jonghye
Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, 02114, USA.
Facebook Artificial Intelligence, Boston, MA, 02142.
Brainlesion. 2021;12658:80-91. doi: 10.1007/978-3-030-72084-1_8. Epub 2021 Mar 27.
Deformable registration of magnetic resonance images between patients with brain tumors and healthy subjects has been an important tool for specifying tumor geometry through location alignment and for facilitating pathological analysis. Since the tumor region does not match any ordinary brain tissue, it has been difficult to deformably register a patient's brain to a normal one. Many patient images contain irregularly distributed lesions, which further distort normal tissue structures and complicate the similarity measure used in registration. In this work, we follow a multi-step context-aware image inpainting framework to generate synthetic tissue intensities in the tumor region. A coarse image-to-image translation is first applied to make a rough inference of the missing parts. Then, a feature-level patch-match refinement module refines the details by modeling the semantic relevance between patch-wise features. A symmetry constraint, reflecting the large degree of anatomical symmetry in the brain, is further proposed to achieve better structural understanding. Deformable registration is applied between the inpainted patient images and normal brains, and the resulting deformation field is eventually used to deform the original patient data for the final alignment. The method was applied to the Multimodal Brain Tumor Segmentation (BraTS) 2018 challenge database and compared against three existing inpainting methods. The proposed method yielded increased peak signal-to-noise ratio, structural similarity index, and inception score, as well as reduced L1 error, leading to successful patient-to-normal brain image registration.
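Two ingredients of the pipeline above can be illustrated in simplified form: the symmetry constraint (penalizing disagreement between an image and its left-right mirror) and the feature-level patch-match step (replacing each in-mask patch feature with its most semantically similar healthy-context feature). The following numpy sketch is illustrative only; the function names and the cosine-similarity choice are assumptions, not the paper's actual implementation, which operates on learned CNN feature maps.

```python
import numpy as np

def symmetry_loss(image):
    """Mean L1 distance between an image and its left-right mirror.

    A toy stand-in for the paper's symmetry constraint: inpainted
    tissue is encouraged to resemble the contralateral hemisphere.
    """
    return np.abs(image - image[:, ::-1]).mean()

def patchmatch_refine(masked_feats, context_feats):
    """For each patch feature inside the tumor mask, copy the most
    similar feature from the healthy context (cosine similarity),
    mimicking feature-level patch-match refinement.

    masked_feats:  (n_masked, d) features of in-mask patches
    context_feats: (n_context, d) features of healthy patches
    """
    # Row-normalize so the dot product equals cosine similarity.
    m = masked_feats / np.linalg.norm(masked_feats, axis=1, keepdims=True)
    c = context_feats / np.linalg.norm(context_feats, axis=1, keepdims=True)
    sim = m @ c.T                  # (n_masked, n_context) similarity matrix
    best = sim.argmax(axis=1)      # index of the closest context patch
    return context_feats[best]     # refined features drawn from context
```

A perfectly symmetric slice yields zero symmetry loss, and each masked feature is refined to the nearest healthy patch feature; in the real method these operations act on deep feature maps rather than raw intensities.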