Chen Wen-Fan, Ou Hsin-You, Pan Cheng-Tang, Liao Chien-Chang, Huang Wen, Lin Han-Yu, Cheng Yu-Fan, Wei Chia-Po
Institute of Medical Science and Technology, National Sun Yat-sen University, Kaohsiung 80424, Taiwan.
Liver Transplantation Program and Departments of Diagnostic Radiology and Surgery, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, Kaohsiung 833401, Taiwan.
Diagnostics (Basel). 2021 Sep 2;11(9):1599. doi: 10.3390/diagnostics11091599.
Because previous studies have rarely investigated the discrepancy in recognition rates and the pathology-data errors that arise when models are applied to different databases, the purpose of this study is to examine how incorporating hospital data into deep-learning-based liver lesion segmentation improves the recognition rate. The recognition model used in this study is H-DenseUNet, applied to segmentation of the liver and its lesions; a hybrid 2D/3D DenseUNet is used to reduce recognition time and system memory requirements. Differences in recognition results were determined by comparing training on the standard LiTS competition data set with training on the same set after adding data from 30 additional patients. An average error of 9.6% was obtained when comparing actual pathology data with pathology data derived from analysis of the recognized images imported from Kaohsiung Chang Gung Memorial Hospital; after mixing the LiTS database with hospital data for training, the average error rate of the recognition output fell to 1%. For segmentation accuracy, the Dice coefficient was 0.52 after training for 50 epochs on the standard LiTS database alone, and increased to 0.61 after the 30 hospital cases were added to the training set. Using 3D Slicer and ITK-SNAP software, a 3D image of the lesion and liver segmentation can be generated. It is hoped that this method will stimulate further research beyond the publicly available standard databases, as well as encourage study of the applicability of hospital data and improve the generality of such databases.
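The segmentation quality above is reported as the Dice coefficient. For reference, a minimal sketch of how this metric is typically computed on binary masks (an illustrative implementation, not the authors' code):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient: 2*|A∩B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0  # both masks empty: conventionally treated as perfect overlap
    return 2.0 * intersection / total

# Toy 4x4 masks: predicted lesion covers 4 voxels, ground truth covers 3,
# with 3 voxels overlapping.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(dice_coefficient(pred, target))  # 2*3/(4+3) ≈ 0.857
```

A Dice value of 1.0 indicates perfect overlap between the predicted and ground-truth segmentations, so the reported improvement from 0.52 to 0.61 reflects substantially better voxel-level agreement after adding the hospital data.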