
A Comparative Study of Transformer-Based Models for Multi-Horizon Blood Glucose Prediction

Authors

Karagoz Meryem Altin, Breton Marc D, El Fathi Anas

Affiliation

Center for Diabetes Technology, University of Virginia, Charlottesville, VA, USA.

Publication

arXiv. 2025 May 12: arXiv:2505.08821v1.

Abstract

Accurate blood glucose prediction can enable novel interventions for type 1 diabetes treatment, including personalized insulin and dietary adjustments. Although recent advances in transformer-based architectures have demonstrated the power of attention mechanisms in complex multivariate time series prediction, their potential for blood glucose (BG) prediction remains underexplored. We present a comparative analysis of transformer models for multi-horizon BG prediction, examining forecasts up to 4 hours ahead and input histories up to 1 week. The publicly available DCLP3 dataset (n=112) was split (80%-10%-10%) into training, validation, and test sets, and the OhioT1DM dataset (n=12) served as an external test set. We trained networks with point-wise, patch-wise, series-wise, and hybrid embeddings, using CGM, insulin, and meal data. For short-term blood glucose prediction, Crossformer, a patch-wise transformer architecture, achieved a superior 30-minute prediction RMSE (15.6 mg/dL on OhioT1DM). For longer-term predictions (1 h, 2 h, and 4 h), PatchTST, another patch-wise transformer, prevailed with the lowest RMSE (24.6 mg/dL, 36.1 mg/dL, and 46.5 mg/dL on OhioT1DM). In general, models that used tokenization through patches demonstrated improved accuracy with larger input sizes, with the best results obtained with a one-week history. These findings highlight the promise of transformer-based architectures for BG prediction by capturing and leveraging seasonal patterns in multivariate time-series data to improve accuracy.
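The patch-wise tokenization used by Crossformer and PatchTST can be sketched as follows: a long CGM history is sliced into short overlapping windows, and each window becomes one token for the transformer. This is a minimal illustrative sketch only; the patch length, stride, and sampling interval below are assumptions for the example, not the hyperparameters reported in the study.

```python
# Minimal sketch of patch-wise tokenization of a CGM series
# (PatchTST/Crossformer-style). Patch length and stride are
# illustrative choices, not the paper's settings.
import numpy as np

def patchify(series, patch_len=16, stride=8):
    """Split a 1-D time series into overlapping patches (tokens)."""
    n = len(series)
    starts = range(0, n - patch_len + 1, stride)
    return np.stack([series[s:s + patch_len] for s in starts])

# One week of 5-minute CGM readings: 7 * 24 * 12 = 2016 samples
cgm = np.random.default_rng(0).normal(120, 30, size=2016)
tokens = patchify(cgm)
print(tokens.shape)  # (num_patches, patch_len); each row is one token
```

Each row would then be linearly projected into the model's embedding dimension, so the attention cost scales with the number of patches rather than the number of raw samples, which is what makes week-long input histories tractable.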


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9b76/12132272/0339576fc77c/nihpp-2505.08821v1-f0001.jpg
