Prieto-González Leonar Steven, Agulles-Pedrós Luis
Department of Physics, Medical Physics Group, National University of Colombia, Campus Bogotá, Bogotá, Colombia.
J Med Phys. 2024 Apr-Jun;49(2):189-202. doi: 10.4103/jmp.jmp_10_24. Epub 2024 Jun 25.
This paper explores different machine learning (ML) algorithms for analyzing diffusion nuclear magnetic resonance imaging (dMRI) models when analytical fitting shows limitations. It reviews various ML techniques for dMRI analysis and evaluates their performance on datasets spanning different b-value ranges, comparing them with analytical methods.
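For orientation, the parameters named below (D, D*, f, K) correspond to the standard intravoxel incoherent motion (IVIM) and diffusion-kurtosis signal representations; the abstract does not write them out, so the following forms are an assumption based on the usual dMRI conventions:

```latex
% IVIM bi-exponential model: perfusion fraction f,
% pseudo-diffusion coefficient D*, diffusion coefficient D
\frac{S(b)}{S_0} = f\,e^{-b D^{*}} + (1 - f)\,e^{-b D}

% Diffusion-kurtosis model: kurtosis K corrects the
% mono-exponential decay at higher b-values
\frac{S(b)}{S_0} = e^{-b D + \frac{1}{6} b^{2} D^{2} K}
```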
After standard fitting for reference, four sets of diffusion-weighted nuclear magnetic resonance images were used to train and test various ML algorithms to predict the diffusion coefficient (D), pseudo-diffusion coefficient (D*), perfusion fraction (f), and kurtosis (K). ML classification algorithms, including the extra-tree classifier (ETC), logistic regression, C-support vector, extra-gradient boost, and multilayer perceptron (MLP), were used to determine whether the diffusion parameters (D, D*, f, and K) were present within single voxels. Regression algorithms, including linear regression, polynomial regression, ridge, lasso, random forest (RF), elastic-net, and support-vector machines, were used to estimate the values of the diffusion parameters. Performance was evaluated using accuracy (ACC), area under the curve (AUC) tests, and cross-validated root mean square error (RMSE). Computational timing was also assessed.
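A minimal sketch of this two-stage pipeline (classification of parameter presence, then regression of parameter values) using scikit-learn; the data shapes, the per-voxel feature layout, and all variable names are illustrative assumptions, not the authors' code:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Illustrative data: one row per voxel, one column per b-value signal.
rng = np.random.default_rng(0)
X = rng.random((1000, 16))           # S(b)/S0 intensities at 16 b-values (assumed)
y_exists = rng.integers(0, 2, 1000)  # stage 1 label: is a perfusion component present?
y_D = rng.random(1000) * 2e-3        # stage 2 target: reference D from analytical fitting

# Stage 1: classify whether a diffusion parameter exists in the voxel.
clf = ExtraTreesClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, X, y_exists, cv=5, scoring="accuracy")
auc = cross_val_score(clf, X, y_exists, cv=5, scoring="roc_auc")
print(f"ETC  ACC {acc.mean():.3f}  AUC {auc.mean():.3f}")

# Stage 2: regress the parameter value; report cross-validated RMSE.
reg = RandomForestRegressor(n_estimators=200, random_state=0)
rmse = -cross_val_score(reg, X, y_D, cv=5, scoring="neg_root_mean_squared_error")
print(f"RF   RMSE {rmse.mean():.2e}")
```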
ETC and MLP were the best classifiers, reaching 94.1% and 91.7%, respectively, on the ACC test and 98.7% and 96.3% on the AUC test. For parameter estimation, the RF algorithm yielded the most accurate results: RMSE percentages of 8.39% for D, 3.57% for D*, 4.52% for f, and 3.53% for K. After the training phase, the ML methods demonstrated a substantial decrease in computational time, running approximately 232 times faster than the conventional fitting methods.
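The RMSE percentages above are presumably the RMSE normalized by the scale of the reference parameter values; a hedged sketch of that metric and of the timing comparison follows (the normalization choice and both function arguments are assumptions):

```python
import time
import numpy as np

def rmse_percent(y_true, y_pred):
    """RMSE as a percentage of the reference values' range (assumed normalization)."""
    rmse = np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))
    return 100.0 * rmse / (np.max(y_true) - np.min(y_true))

def speedup(analytical_fit, ml_predict, X):
    """Wall-clock ratio of conventional fitting to a trained ML predictor."""
    t0 = time.perf_counter(); analytical_fit(X); t1 = time.perf_counter()
    ml_predict(X); t2 = time.perf_counter()
    return (t1 - t0) / (t2 - t1)
```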
The findings suggest that ML algorithms can enhance the efficiency of dMRI model analysis and offer new perspectives on the microstructural and functional organization of biological tissues. This paper also discusses the limitations and future directions of ML-based dMRI analysis.