Zuo Qiankun, Shi Zhengkun, Liu Bo, Ping Na, Wang Jiangtao, Cheng Xi, Zhang Kexin, Guo Jia, Wu Yixian, Hong Jin
Hubei Key Laboratory of Digital Finance Innovation, Hubei University of Economics, Wuhan, China.
School of Information Engineering, Hubei University of Economics, Wuhan, China.
Front Cell Dev Biol. 2024 Oct 11;12:1484880. doi: 10.3389/fcell.2024.1484880. eCollection 2024.
Retinal diseases significantly impair patients' quality of life and increase societal healthcare costs. Optical coherence tomography (OCT) offers high-resolution imaging for precise detection and monitoring of these conditions. While deep learning techniques have been employed to extract features from OCT images for classification, convolutional neural networks (CNNs) often fail to capture global context because of their reliance on local receptive fields. Transformer-based methods, on the other hand, incur quadratic computational complexity when modeling long-range dependencies.
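As a rough back-of-the-envelope comparison (the notation is ours, not taken from the paper): for a sequence of $L$ patch tokens with embedding width $d$, self-attention forms all pairwise token interactions, whereas a Mamba-style selective scan processes the tokens recurrently,

$$\underbrace{\mathcal{O}(L^{2}d)}_{\text{self-attention}} \qquad \text{vs.} \qquad \underbrace{\mathcal{O}(Ld)}_{\text{selective scan}},$$

so halving the patch size quadruples $L$, inflating the attention cost roughly sixteenfold while the scan cost only quadruples.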
To overcome these limitations, we introduce the Multi-Resolution Visual Mamba (MRVM) model, which captures long-range dependencies with linear computational complexity for OCT image classification. The MRVM model first employs convolution to extract local features and then applies the retinal Mamba to capture global dependencies. By integrating multi-scale global features, the MRVM enhances classification accuracy and overall performance. Additionally, the multi-directional selection mechanism (MSM) within the retinal Mamba improves feature extraction by attending to multiple scan directions, thereby better capturing complex, orientation-specific retinal patterns; a hedged architectural sketch of this pipeline is given below.
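The sketch below illustrates the pipeline described above: a convolutional stem for local features, a simplified linear-time multi-directional scan standing in for the retinal Mamba and its MSM, fusion of global features from two resolutions, and a classification head. All module names (MRVMSketch, SimpleDirectionalScan, MultiDirectionalBlock), layer sizes, the gated recurrence, and the four-class output are illustrative assumptions, not the published implementation.

```python
# Minimal, self-contained sketch of the MRVM-style pipeline described in the
# abstract. The gated recurrence below is a simplified stand-in for a
# selective state-space (Mamba-style) scan; all names and sizes are assumed.
import torch
import torch.nn as nn


class SimpleDirectionalScan(nn.Module):
    """Gated linear recurrence applied along one spatial ordering.

    Cost is linear in sequence length, unlike quadratic self-attention.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(dim, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, L, C)
        g = torch.sigmoid(self.gate(x))        # per-step decay gate
        v = self.proj(x)
        h = torch.zeros_like(x[:, 0])
        out = []
        for t in range(x.size(1)):             # linear-time recurrence over tokens
            h = g[:, t] * h + (1 - g[:, t]) * v[:, t]
            out.append(h)
        return torch.stack(out, dim=1)


class MultiDirectionalBlock(nn.Module):
    """Scans the patch sequence in several orderings (row-major forward,
    row-major backward, column-major) and averages the results -- a rough
    analogue of the multi-directional selection mechanism (MSM)."""

    def __init__(self, dim: int):
        super().__init__()
        self.scans = nn.ModuleList([SimpleDirectionalScan(dim) for _ in range(3)])
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor, h: int, w: int) -> torch.Tensor:
        b, l, c = x.shape
        # Reorder tokens column by column for the third scan direction.
        col = x.view(b, h, w, c).transpose(1, 2).reshape(b, l, c)
        outs = [
            self.scans[0](x),                              # row-major, forward
            self.scans[1](x.flip(1)).flip(1),              # row-major, backward
            self.scans[2](col).view(b, w, h, c).transpose(1, 2).reshape(b, l, c),
        ]
        return self.norm(x + sum(outs) / len(outs))


class MRVMSketch(nn.Module):
    """Conv stem -> multi-directional scans at two resolutions -> fused classifier."""

    def __init__(self, num_classes: int = 4, dim: int = 64):
        super().__init__()
        self.stem = nn.Sequential(                     # local features via convolution
            nn.Conv2d(1, dim, 3, stride=4, padding=1), nn.GELU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.GELU(),
        )
        self.down = nn.Conv2d(dim, dim, 3, stride=2, padding=1)
        self.block_hi = MultiDirectionalBlock(dim)     # fine-scale global context
        self.block_lo = MultiDirectionalBlock(dim)     # coarse-scale global context
        self.head = nn.Linear(2 * dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f_hi = self.stem(x)                            # (B, C, H, W)
        f_lo = self.down(f_hi)                         # coarser resolution

        def run(block, f):
            b, c, h, w = f.shape
            seq = f.flatten(2).transpose(1, 2)         # (B, H*W, C) token sequence
            return block(seq, h, w).mean(dim=1)        # global average over tokens

        fused = torch.cat([run(self.block_hi, f_hi), run(self.block_lo, f_lo)], dim=-1)
        return self.head(fused)                        # class logits


if __name__ == "__main__":
    model = MRVMSketch(num_classes=4)
    logits = model(torch.randn(2, 1, 224, 224))        # grayscale OCT-sized input
    print(logits.shape)                                # torch.Size([2, 4])
```

The explicit Python loop in SimpleDirectionalScan is written for readability; practical Mamba implementations replace it with a fused, hardware-aware parallel scan, which is what makes the linear-time formulation competitive in wall-clock terms.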
Experimental results demonstrate that the MRVM model excels in differentiating retinal images with various lesions, achieving superior detection accuracy compared to traditional methods, with overall accuracies of 98.98% and 96.21% on two public datasets, respectively.
This approach offers a novel perspective for accurately identifying retinal diseases and could contribute to the development of more robust artificial intelligence algorithms and recognition systems for medical image-assisted diagnosis.