Noaro G, Cappon G, Sparacino G, Del Favero S, Facchinetti A
Annu Int Conf IEEE Eng Med Biol Soc. 2020 Jul;2020:5502-5505. doi: 10.1109/EMBC44109.2020.9176021.
Type 1 diabetes (T1D) therapy requires multiple daily insulin injections to compensate for the lack of endogenous insulin production caused by β-cell destruction. An empirical standard formula (SF) is commonly used for this task. Unfortunately, the SF does not include information on glucose dynamics, e.g., the glucose rate-of-change (ROC) provided by continuous glucose monitoring (CGM) sensors. Hence, the SF can lead to under- or overestimation of the bolus, which can cause critical hypo- or hyperglycemic episodes during or after the meal. Recently, to overcome this limitation, we proposed new linear regression models integrating ROC information and personalized features. Despite these first encouraging results, the nonlinear nature of the problem calls for the application of nonlinear models. In this work, two nonlinear machine learning methodologies, random forest (RF) and gradient boosting tree (GBT), were investigated. A dataset of 100 virtual subjects, suitably divided into training and testing sets, was used. For each individual, a single-meal scenario with different meal conditions (preprandial ROC, blood glucose, and meal amount) was simulated. The assessment was performed both in terms of accuracy in estimating the optimal bolus and in terms of glycemic control. Results were compared to the best-performing linear model previously developed. The two proposed tree-based models led to a statistically significant improvement in glycemic control compared to the linear approach, reducing the time spent in hypoglycemia from 32.49% to 27.57% and 25.20% for RF and GBT, respectively. These results represent a preliminary step toward showing that nonlinear machine learning techniques can improve insulin bolus estimation in T1D therapy. In particular, RF and GBT were shown to outperform the previously proposed linear models.
Clinical Relevance— Insulin bolus estimation with nonlinear machine learning techniques reduces the risk of adverse events in T1D therapy.
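To make the described setup concrete, the following is a minimal, hypothetical sketch of training RF and GBT regressors to estimate a meal bolus from preprandial features, as the abstract outlines. The feature set, the synthetic data, and the "optimal bolus" target below are assumptions for illustration, not the authors' dataset or code; the embedded SF is the commonly used bolus calculator formula (CHO/CR plus a glucose correction term, with insulin-on-board omitted here for brevity).

```python
# Hypothetical sketch (assumptions throughout): RF/GBT bolus estimation
# with synthetic single-meal scenarios, not the paper's simulator data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 1000  # simulated single-meal scenarios

# Assumed preprandial features: glucose (mg/dL), CGM rate-of-change
# (mg/dL/min), meal carbohydrates (g), and per-subject therapy parameters.
bg = rng.uniform(70, 250, n)
roc = rng.uniform(-3, 3, n)
cho = rng.uniform(20, 120, n)
cr = rng.uniform(5, 20, n)    # carbohydrate ratio (g/U)
cf = rng.uniform(20, 60, n)   # correction factor (mg/dL per U)

# Standard formula (SF) baseline: bolus = CHO/CR + (BG - target)/CF.
target_bg = 110.0
sf_bolus = cho / cr + (bg - target_bg) / cf

# Synthetic "optimal" bolus: SF plus a nonlinear ROC-dependent adjustment,
# a stand-in for the simulator-derived optimum used in the paper.
optimal = sf_bolus * (1 + 0.1 * np.tanh(roc)) + rng.normal(0, 0.2, n)

X = np.column_stack([bg, roc, cho, cr, cf])
X_tr, X_te, y_tr, y_te = train_test_split(X, optimal, random_state=0)

for model in (RandomForestRegressor(n_estimators=200, random_state=0),
              GradientBoostingRegressor(random_state=0)):
    model.fit(X_tr, y_tr)
    mae = mean_absolute_error(y_te, model.predict(X_te))
    print(f"{type(model).__name__}: MAE = {mae:.3f} U")
```

Because both models are nonlinear in the inputs, they can capture interactions such as a ROC-dependent scaling of the carbohydrate term, which a linear regression on the same features cannot represent exactly.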