Department of Biomedical Engineering, College of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, 603203, India.
Department of Biomedical Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, Tamil Nadu, 602105, India.
Sci Rep. 2024 Jun 24;14(1):14571. doi: 10.1038/s41598-024-64150-0.
The study aimed to achieve the following objectives: (1) to fuse thermal and visible tongue images using various fusion rules of the discrete wavelet transform (DWT) to classify diabetic and normal subjects; (2) to extract statistical features from the region of interest of the tongue images before and after fusion; (3) to distinguish healthy subjects from diabetic patients using the fused tongue images with deep learning and machine learning algorithms. The study participants comprised 80 normal subjects and 80 age- and sex-matched diabetic patients. Biochemical tests, namely fasting glucose, postprandial glucose, and HbA1c, were performed for all participants. The visible and thermal tongue images were acquired using a digital single-lens reflex (DSLR) camera and a thermal infrared camera, respectively. The visible and thermal tongue images were fused using the wavelet transform method. Gray-level co-occurrence matrix (GLCM) features were then extracted separately from the visible, thermal, and fused tongue images. Machine learning classifiers and deep learning networks, namely VGG16 and ResNet50, were used to classify normal subjects and diabetes mellitus patients. Image quality metrics were computed to compare the classifiers' performance before and after fusion. The support vector machine outperformed the other machine learning classifiers after fusion, with an accuracy of 88.12%, compared with 84.37% (thermal) and 63.1% (visible) before fusion. VGG16 achieved a classification accuracy of 94.37% after fusion, versus 90.62% and 85% on the individual thermal and visible tongue images, respectively. These results indicate that fused tongue images might serve as a non-contact tool for pre-screening type II diabetes mellitus.
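The abstract does not specify the fusion rules used. As an illustration only, a minimal NumPy sketch of one-level Haar DWT image fusion is shown below, assuming one common rule combination (average the approximation band, pick the larger-magnitude detail coefficients); the function names and the choice of rules are assumptions, not the paper's exact method:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: returns (LL, LH, HL, HH) sub-bands."""
    a = img[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 4.0   # approximation (low-low)
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse_dwt(img1, img2):
    """Fuse two equally sized images: average LL, max-abs details."""
    s1, s2 = haar_dwt2(img1), haar_dwt2(img2)
    bands = [(s1[0] + s2[0]) / 2.0]  # approximation: averaging rule
    for d1, d2 in zip(s1[1:], s2[1:]):
        # detail bands: keep the coefficient with the larger magnitude
        bands.append(np.where(np.abs(d1) >= np.abs(d2), d1, d2))
    return haar_idwt2(*bands)
```

In practice the two modalities must first be registered to the same resolution; fusing an image with itself returns the image unchanged, which is a quick sanity check for the transform pair.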
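The GLCM feature-extraction step can likewise be sketched in plain NumPy. The number of gray levels, the pixel offset, and the three features below are illustrative assumptions; the paper's exact GLCM configuration is not given in the abstract:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Symmetric, normalised gray-level co-occurrence matrix for one
    pixel offset (dx, dy), for an 8-bit grayscale image."""
    q = (img.astype(int) * levels) // 256   # quantise to `levels` bins
    P = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            P[q[i, j], q[i + dy, j + dx]] += 1
    P = P + P.T                              # make symmetric
    return P / P.sum()

def glcm_features(P):
    """Three classic Haralick-style features from a normalised GLCM."""
    i, j = np.indices(P.shape)
    return {
        "contrast":    float(np.sum(P * (i - j) ** 2)),
        "homogeneity": float(np.sum(P / (1.0 + (i - j) ** 2))),
        "energy":      float(np.sum(P ** 2)),
    }
```

Such features, computed per image (visible, thermal, fused), would form the input vectors for the machine learning classifiers; a perfectly uniform image yields zero contrast and unit energy, a useful unit check.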