Yang Tao, Lu Xueqi, Yang Lanlan, Yang Miyang, Chen Jinghui, Zhao Hongjia
The First Clinical Medical College, The Affiliated People's Hospital of Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China.
School of Biomedical Engineering, Southern Medical University, Guangzhou, China.
Front Neurosci. 2025 Jan 7;18:1510175. doi: 10.3389/fnins.2024.1510175. eCollection 2024.
To assist in the rapid clinical identification of brain tumor types while also performing segmentation, this study investigates the feasibility of applying the deep-learning YOLOv5s model to the segmentation of brain tumor magnetic resonance images, and optimizes and upgrades the model on this basis.
This study utilized two public Kaggle datasets of meningioma and glioma magnetic resonance images. Dataset 1 contains 3,223 images and Dataset 2 contains 216 images. From Dataset 1, we randomly selected 3,000 images and annotated the tumor regions with the LabelImg tool, then divided them into training and validation sets at a 7:3 ratio. The remaining 223 images and Dataset 2 were used as the internal and external test sets, respectively, to evaluate the model's segmentation performance. The original YOLOv5s architecture was optimized by introducing Atrous Spatial Pyramid Pooling (ASPP), the Convolutional Block Attention Module (CBAM), and Coordinate Attention (CA), yielding five improved variants: YOLOv5s-ASPP, YOLOv5s-CBAM, YOLOv5s-CA, YOLOv5s-ASPP-CBAM, and YOLOv5s-ASPP-CA. The training and validation sets were fed to the original YOLOv5s model, the five optimized models, and the YOLOv8s model for 100 training epochs. The best weight file of the best-performing of the six trained YOLOv5s-based models was then used for the final evaluation on the test sets.
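The data-partition protocol described above (randomly pick 3,000 of Dataset 1's 3,223 images for annotation, split those 7:3 into training and validation sets, and reserve the rest as the internal test set) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name, seed, and use of image IDs are assumptions.

```python
import random

def split_dataset(image_ids, n_annotated=3000, train_frac=0.7, seed=42):
    """Randomly select n_annotated images for annotation, split them
    train_frac : (1 - train_frac) into training/validation sets, and
    keep the remainder as the internal test set."""
    ids = list(image_ids)
    rng = random.Random(seed)   # fixed seed for a reproducible split
    rng.shuffle(ids)
    annotated = ids[:n_annotated]        # images to be labeled (e.g. with LabelImg)
    internal_test = ids[n_annotated:]    # held out, never seen in training
    n_train = int(len(annotated) * train_frac)
    return annotated[:n_train], annotated[n_train:], internal_test

train, val, test = split_dataset(range(3223))
print(len(train), len(val), len(test))  # 2100 900 223
```

With 3,223 images this reproduces the paper's counts: 2,100 training, 900 validation, and 223 internal-test images; Dataset 2's 216 images would form the external test set unchanged.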
After training, all seven models could segment and recognize brain tumor magnetic resonance images. Their precision on the validation set was 92.5%, 93.5%, 91.2%, 91.8%, 89.6%, 90.8%, and 93.1%, respectively; the corresponding recall was 84.0%, 85.3%, 85.4%, 84.7%, 87.3%, 85.4%, and 91.9%. The best weight file of the best-performing of the six trained YOLOv5s-based models was evaluated on the test sets, where the improved model significantly enhanced image segmentation compared with the original model.
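The precision and recall figures above follow the standard definitions over true positives (TP), false positives (FP), and false negatives (FN). A minimal sketch for reference; the counts in the example are illustrative, not taken from the paper:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN).
    Guards against division by zero when a class has no predictions
    or no ground-truth instances."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Illustrative counts: 90 correct detections, 10 spurious, 15 missed.
p, r = precision_recall(tp=90, fp=10, fn=15)
print(f"precision={p:.3f} recall={r:.3f}")  # precision=0.900 recall=0.857
```

In detection/segmentation settings such as YOLO, a prediction typically counts as a true positive when its overlap with a ground-truth box exceeds an IoU threshold; the threshold used is not stated in this abstract.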
Among the five improved models, YOLOv5s-ASPP significantly enhanced the segmentation of brain tumor magnetic resonance images compared with the original YOLOv5s model, which may help assist clinical diagnosis and treatment planning.