
Novelty Classification Model Use in Reinforcement Learning for Cervical Cancer.

Author Information

Muksimova Shakhnoza, Umirzakova Sabina, Shoraimov Khusanboy, Baltayev Jushkin, Cho Young-Im

Affiliations

Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 461-701, Republic of Korea.

Department of Systematic and Practical Programming, Tashkent University of Information Technologies Named After Muhammad Al-Khwarizmi, Tashkent 100200, Uzbekistan.

Publication Information

Cancers (Basel). 2024 Nov 10;16(22):3782. doi: 10.3390/cancers16223782.

Abstract

PURPOSE

Cervical cancer significantly impacts global health, and early detection is pivotal for improving patient outcomes. This study aims to enhance the accuracy of cervical cancer diagnosis by addressing class imbalance through a novel hybrid deep learning model.

METHODS

The proposed model, RL-CancerNet, integrates EfficientNetV2 and Vision Transformers (ViTs) within a Reinforcement Learning (RL) framework. EfficientNetV2 extracts local features from cervical cytology images to capture fine-grained details, while ViTs analyze these features to recognize global dependencies across image patches. To address class imbalance, an RL agent dynamically adjusts the focus towards minority classes, thus reducing the common bias towards majority classes in medical image classification. Additionally, a Supporter Module incorporating Conv3D and BiLSTM layers with an attention mechanism enhances contextual learning.
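The abstract does not include implementation details, so the following is a minimal PyTorch sketch of how the described components could fit together. The class names, layer sizes, the stand-in backbone, and the bandit-style `update_class_weights` function are assumptions for illustration, not the authors' code; in particular, the paper's actual reward design for the RL agent is not given in the abstract.

```python
# Illustrative sketch only (not the authors' implementation).
# A small CNN stands in for EfficientNetV2, a TransformerEncoder for the ViT
# stage, and a Conv3d + bidirectional LSTM block for the Supporter Module.
# The "RL agent" is reduced to a bandit-style update of per-class loss weights.
import torch
import torch.nn as nn

class SupporterModule(nn.Module):
    """Conv3D + BiLSTM with simple attention pooling (hypothetical layout)."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv3d = nn.Conv3d(1, 8, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(8 * dim, dim, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * dim, 1)

    def forward(self, tokens):                      # tokens: (B, T, dim)
        x = tokens.unsqueeze(1).unsqueeze(-1)       # treat tokens as a 3D volume
        x = self.conv3d(x)                          # (B, 8, T, dim, 1)
        x = x.squeeze(-1).permute(0, 2, 1, 3).flatten(2)  # (B, T, 8*dim)
        h, _ = self.bilstm(x)                       # (B, T, 2*dim)
        w = torch.softmax(self.attn(h), dim=1)      # attention over positions
        return (w * h).sum(dim=1)                   # (B, 2*dim)

class RLCancerNetSketch(nn.Module):
    def __init__(self, num_classes=5, dim=128):
        super().__init__()
        # Stand-in for EfficientNetV2: any extractor returning a patch grid.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, dim, kernel_size=16, stride=16), nn.ReLU())
        enc = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.vit = nn.TransformerEncoder(enc, num_layers=2)  # global dependencies
        self.supporter = SupporterModule(dim)
        self.head = nn.Linear(2 * dim, num_classes)

    def forward(self, images):                      # images: (B, 3, H, W)
        feats = self.backbone(images)               # (B, dim, H/16, W/16)
        tokens = feats.flatten(2).transpose(1, 2)   # (B, T, dim) patch tokens
        tokens = self.vit(tokens)
        return self.head(self.supporter(tokens))

def update_class_weights(weights, per_class_recall, lr=0.1):
    """Bandit-style stand-in for the RL agent: raise the loss weight of classes
    with low recall (typically the minority classes) for the next epoch."""
    reward = 1.0 - per_class_recall                 # poor recall -> larger push
    return torch.clamp(weights + lr * reward, min=0.1)
```

As a usage sketch, the updated weights would be passed to a weighted loss between epochs, e.g. `nn.CrossEntropyLoss(weight=update_class_weights(weights, recall))`, so that training attention shifts toward under-recognized minority classes.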

RESULTS

RL-CancerNet was evaluated on the benchmark cervical cytology datasets Herlev and SipaKMeD, achieving an exceptional accuracy of 99.7%. This performance surpasses several state-of-the-art models, demonstrating the model's effectiveness in identifying subtle diagnostic features in complex backgrounds.

CONCLUSIONS

The integration of CNNs, ViTs, and RL into RL-CancerNet significantly improves the diagnostic accuracy of cervical cancer screenings. This model not only advances the field of automated medical screening but also provides a scalable framework adaptable to other medical imaging tasks, potentially enhancing diagnostic processes across various medical domains.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a0ad/11592902/689f7ba9853f/cancers-16-03782-g001.jpg
