Department of Computer Science and Engineering, University of West Bohemia, Univerzitni 8, 301 00, Plzen, Czech Republic.
Department of Computer Science, University of Waikato, New Zealand.
Comput Biol Med. 2022 Jun;145:105388. doi: 10.1016/j.compbiomed.2022.105388. Epub 2022 Mar 19.
Diabetes mellitus manifests as prolonged elevated blood glucose levels resulting from impaired insulin production. Such high glucose levels sustained over a long period damage multiple internal organs. To mitigate this condition, researchers and engineers have developed the closed-loop artificial pancreas, consisting of a continuous glucose monitor and an insulin pump connected via a microcontroller or smartphone. A key problem, however, is how to accurately predict short-term future glucose levels in order to exert efficient glucose-level control. Much work in the literature focuses on minimising prediction error as the key metric and therefore pursues complex prediction methods such as deep learning. Such an approach neglects other important design issues: method complexity (impacting interpretability and safety), hardware requirements of low-power devices such as the insulin pump, the amount of input data required for training (potentially rendering the method infeasible for new patients), and the fact that very small improvements in accuracy may not deliver significant clinical benefit.
We propose a novel low-complexity, explainable blood glucose prediction method derived from the Intel P6 branch predictor algorithm. We use Meta-Differential Evolution to determine the predictor's parameters on the training splits of the benchmark datasets. We compare the new algorithm with a state-of-the-art deep-learning method for blood glucose level prediction.
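The abstract does not give the adapted algorithm itself, but the P6 family of branch predictors is built on two-bit saturating counters indexed by a register of recent outcomes. As a purely illustrative sketch (not the authors' method), the same machinery can be pointed at a glucose series by treating each step's rise or fall as the "branch outcome"; the `TrendPredictor` class, its `history_len` parameter, and the toy glucose values below are all hypothetical:

```python
# Hypothetical sketch of a two-bit saturating-counter trend predictor,
# in the spirit of the Intel P6 branch predictor. NOT the paper's code.
from collections import defaultdict

class TrendPredictor:
    def __init__(self, history_len=4):
        self.history_len = history_len
        self.history = 0  # bit pattern of the last up(1)/down(0) moves
        # One 2-bit counter per history pattern, started weakly at "up" (2).
        self.counters = defaultdict(lambda: 2)

    def predict(self):
        # Counter value >= 2 predicts a rising trend, otherwise falling.
        return 1 if self.counters[self.history] >= 2 else 0

    def update(self, actual_up):
        c = self.counters[self.history]
        # Saturating increment/decrement, clamped to [0, 3].
        self.counters[self.history] = min(3, c + 1) if actual_up else max(0, c - 1)
        # Shift the observed outcome into the history register.
        mask = (1 << self.history_len) - 1
        self.history = ((self.history << 1) | actual_up) & mask

# Toy glucose series (mg/dL): learn whether each step went up or down.
glucose = [110, 112, 115, 113, 116, 119, 121, 120, 123]
p = TrendPredictor()
hits = 0
for prev, cur in zip(glucose, glucose[1:]):
    up = 1 if cur > prev else 0
    hits += (p.predict() == up)
    p.update(up)
```

A full predictor would still need to turn the predicted trend direction into a glucose value at the prediction horizon; the abstract's mention of Meta-Differential Evolution suggests such free parameters are fitted on the training split rather than hand-tuned.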
To evaluate the new method, the Blood Glucose Level Prediction Challenge benchmark dataset is used. On the official test data split after training, the state-of-the-art deep learning method predicted glucose levels 30 min ahead of the current time with 96.3% of predicted glucose levels having a relative error of less than 30% (equivalent to the safe zone of the Surveillance Error Grid). Our simpler, interpretable approach prolonged the prediction horizon by a further 5 min, with 95.8% of predicted glucose levels across all patients having a relative error of less than 30%.
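The headline numbers above are the share of predictions falling within a 30% relative-error band of the reference reading. A minimal sketch of that computation (the function name and the toy values are illustrative, not taken from the paper's evaluation code):

```python
# Fraction of predictions whose relative error against the reference
# glucose value is below 30% -- the threshold the abstract equates
# with the Surveillance Error Grid safe zone.
def safe_fraction(reference, predicted, threshold=0.30):
    """Share of predictions with |pred - ref| / ref < threshold."""
    within = sum(
        1 for r, p in zip(reference, predicted)
        if abs(p - r) / r < threshold
    )
    return within / len(reference)

# Toy example with made-up values (mg/dL): 4 of 5 fall within 30%.
ref = [100, 150, 200, 90, 120]
pred = [110, 140, 270, 95, 118]
print(safe_fraction(ref, pred))  # -> 0.8
```

Note that the relative error is taken against the reference value, so the same absolute error counts for more at low glucose levels, where clinical risk is highest.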
When considering predictive performance as assessed using the Blood Glucose Level Prediction Challenge benchmark dataset and Surveillance Error Grid metrics, we found that the new algorithm delivered comparable predictive accuracy while operating only on the glucose-level signal and with considerably lower computational complexity.