Li Zexiang, Du Wei, Shi Yongtao, Li Wei, Gao Chao
College of Electrical Engineering and New Energy, China Three Gorges University, Yichang, Hubei, 443002, China.
College of Computer and Information Technology, China Three Gorges University, Yichang, Hubei, 443002, China.
Sci Rep. 2024 May 22;14(1):11701. doi: 10.1038/s41598-024-61238-5.
Because labeled prostate data are scarce and ultrasound images carry extensive, complex semantic information, accurately and quickly segmenting the prostate in transrectal ultrasound (TRUS) images remains a challenging task. In this context, this paper proposes a solution for TRUS image segmentation using an end-to-end bidirectional semantic constraint method, namely the BiSeC model. Experimental results show that, compared with classic and popular deep learning methods, the proposed method achieves better segmentation performance, with a Dice Similarity Coefficient (DSC) of 96.74% and an Intersection over Union (IoU) of 93.71%. Our model strikes a good balance between actual boundaries and noisy regions, reducing cost while ensuring both the accuracy and the speed of segmentation.
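For reference, the DSC and IoU reported above are standard overlap metrics for binary segmentation masks. The following is a minimal sketch of how they are typically computed; the function name and the example masks are illustrative and not taken from the BiSeC implementation.

```python
import numpy as np

def dice_and_iou(pred, target, eps=1e-7):
    """Compute Dice Similarity Coefficient and Intersection over Union
    for two binary segmentation masks (values in {0, 1})."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # DSC = 2|A ∩ B| / (|A| + |B|); IoU = |A ∩ B| / |A ∪ B|
    dice = 2.0 * intersection / (pred.sum() + target.sum() + eps)
    iou = intersection / (union + eps)
    return dice, iou

# Hypothetical example: compare a predicted prostate mask against ground truth
pred_mask = np.array([[0, 1, 1], [0, 1, 0]])
true_mask = np.array([[0, 1, 1], [1, 1, 0]])
dsc, iou = dice_and_iou(pred_mask, true_mask)
print(f"DSC={dsc:.4f}, IoU={iou:.4f}")
```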