
Dual Accuracy-Quality-Driven Neural Network for Prediction Interval Generation.

Author Information

Morales Giorgio, Sheppard John W

Publication Information

IEEE Trans Neural Netw Learn Syst. 2025 Feb;36(2):2843-2853. doi: 10.1109/TNNLS.2023.3339470. Epub 2025 Feb 6.

Abstract

Accurate uncertainty quantification is necessary to enhance the reliability of deep learning (DL) models in real-world applications. In the case of regression tasks, prediction intervals (PIs) should be provided along with the deterministic predictions of DL models. Such PIs are useful or "high-quality (HQ)" as long as they are sufficiently narrow and capture most of the probability density. In this article, we present a method to learn PIs for regression-based neural networks (NNs) automatically in addition to the conventional target predictions. In particular, we train two companion NNs: one that uses one output, the target estimate, and another that uses two outputs, the upper and lower bounds of the corresponding PI. Our main contribution is the design of a novel loss function for the PI-generation network that takes into account the output of the target-estimation network and has two optimization objectives: minimizing the mean PI width and ensuring the PI integrity using constraints that maximize the PI probability coverage implicitly. Furthermore, we introduce a self-adaptive coefficient that balances both objectives within the loss function, which alleviates the task of fine-tuning. Experiments using a synthetic dataset, eight benchmark datasets, and a real-world crop yield prediction dataset showed that our method was able to maintain a nominal probability coverage and produce significantly narrower PIs without detriment to its target estimation accuracy when compared to those PIs generated by three state-of-the-art neural-network-based methods. In other words, our method was shown to produce higher quality PIs.
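To make the two-objective design concrete, below is a minimal PyTorch sketch of one plausible reading of the setup described in the abstract: a point-estimate network, a companion PI-bound network, a loss that combines mean PI width with integrity penalties, and a coverage-driven update of the balancing coefficient. The hinge-style penalties, the proportional coefficient update, and all names (TargetNet, IntervalNet, pi_loss, update_lambda) are illustrative assumptions, not the authors' actual formulation, which is defined in the paper.

```python
# Sketch of a dual-network prediction-interval setup (assumptions noted above).
import torch
import torch.nn as nn


class TargetNet(nn.Module):
    """Companion network 1: a single output, the point (target) estimate."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)


class IntervalNet(nn.Module):
    """Companion network 2: two outputs, the upper and lower PI bounds."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2))

    def forward(self, x):
        out = self.net(x)
        return out[:, 0], out[:, 1]  # upper bound, lower bound


def pi_loss(y_true, y_hat, y_upper, y_lower, lambda_):
    """Two objectives: (1) minimize mean PI width; (2) integrity penalties that
    push the bounds to bracket both the target and the point estimate, which
    implicitly raises probability coverage. The hinge form is an assumption."""
    width = torch.mean(y_upper - y_lower)  # mean PI width
    viol = (torch.relu(y_lower - y_true) + torch.relu(y_true - y_upper) +
            torch.relu(y_lower - y_hat) + torch.relu(y_hat - y_upper))
    return width + lambda_ * torch.mean(viol)


def update_lambda(lambda_, coverage, nominal=0.95, eta=0.01):
    """Self-adaptive balancing coefficient (illustrative proportional rule):
    raise the penalty weight when observed coverage falls below the nominal
    level, lower it otherwise."""
    return max(0.0, lambda_ + eta * (nominal - coverage))
```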

