

Development of Novel Residual-Dense-Attention (RDA) U-Net Network Architecture for Hepatocellular Carcinoma Segmentation.

Author Information

Chen Wen-Fan, Ou Hsin-You, Lin Han-Yu, Wei Chia-Po, Liao Chien-Chang, Cheng Yu-Fan, Pan Cheng-Tang

Affiliations

Institute of Medical Science and Technology, National Sun Yat-sen University, Kaohsiung 80424, Taiwan.

Liver Transplantation Program and Departments of Diagnostic Radiology and Surgery, Kaohsiung Chang Gung Memorial Hospital, and Chang Gung University College of Medicine, Kaohsiung 83301, Taiwan.

Publication Information

Diagnostics (Basel). 2022 Aug 8;12(8):1916. doi: 10.3390/diagnostics12081916.

Abstract

This research is based on artificial-intelligence image recognition and is intended to help physicians make correct decisions through deep learning. The liver dataset used in this study was derived from the open-source LiTS dataset and from data provided by Kaohsiung Chang Gung Memorial Hospital. CT images were used for organ recognition and lesion segmentation; the proposed Residual-Dense-Attention (RDA) U-Net achieves high accuracy without the use of contrast agents. The U-Net architecture combines the ResBlock of ResNet with the Dense Block of DenseNet in the encoder, allowing training to preserve parameters while reducing overall recognition computation time. The decoder is equipped with Attention Gates that suppress irrelevant regions of the image while focusing on significant features. The RDA model was used to identify and segment the liver and its lesions from abdominal CT images, and excellent segmentation was achieved for liver regions located on the left side, the right side, near the heart, and near the lower abdomen adjacent to other organs. Good recognition was also achieved for large, small, single, and multiple lesions. Compared with other convolutional models, the overall computation time was reduced by about 28%; the accuracy of liver and lesion segmentation reached 96% and 94.8%, with IoU values of 89.5% and 87% and AVGDIST values of 0.28 and 0.80, respectively.
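To make the architectural idea concrete, the sketch below shows, in PyTorch, how a residual-dense encoder block and a decoder-side attention gate of the kind described in the abstract could be wired. This is a minimal illustration, not the authors' implementation: the module names, channel counts, growth rate, and number of dense layers are assumptions chosen for readability.

```python
# Illustrative sketch only: layer sizes and wiring are assumptions, not the paper's code.
import torch
import torch.nn as nn


class ResidualDenseBlock(nn.Module):
    """Encoder block mixing DenseNet-style concatenation with a ResNet-style skip."""

    def __init__(self, channels: int, growth: int = 16, layers: int = 3):
        super().__init__()
        self.convs = nn.ModuleList()
        in_ch = channels
        for _ in range(layers):
            self.convs.append(
                nn.Sequential(
                    nn.Conv2d(in_ch, growth, kernel_size=3, padding=1),
                    nn.BatchNorm2d(growth),
                    nn.ReLU(inplace=True),
                )
            )
            in_ch += growth  # dense connectivity: each layer sees all earlier features
        # 1x1 conv fuses the concatenated features back to `channels`
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for conv in self.convs:
            features.append(conv(torch.cat(features, dim=1)))
        return x + self.fuse(torch.cat(features, dim=1))  # residual addition


class AttentionGate(nn.Module):
    """Decoder-side gate that weights skip-connection features by relevance."""

    def __init__(self, skip_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, skip: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # `gate` comes from the coarser decoder level, upsampled to the skip resolution
        gate = nn.functional.interpolate(
            gate, size=skip.shape[2:], mode="bilinear", align_corners=False
        )
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(skip) + self.phi(gate))))
        return skip * attn  # suppress irrelevant regions, keep salient ones


if __name__ == "__main__":
    x = torch.randn(1, 32, 64, 64)   # feature map at one encoder level (illustrative shape)
    g = torch.randn(1, 64, 32, 32)   # coarser decoder features (illustrative shape)
    block = ResidualDenseBlock(channels=32)
    gate = AttentionGate(skip_ch=32, gate_ch=64, inter_ch=16)
    print(block(x).shape, gate(block(x), g).shape)
```

The dense concatenation reuses earlier feature maps while the residual addition keeps the block's output channel count fixed, which is consistent with the abstract's claim of preserving parameters while cutting computation; the attention gate multiplies each skip connection by a learned relevance map before it reaches the decoder.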


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d421/9406579/acccce349523/diagnostics-12-01916-g001.jpg
