

Explainable artificial intelligence for reliable water demand forecasting to increase trust in predictions.

Author Information

Maußner Claudia, Oberascher Martin, Autengruber Arnold, Kahl Arno, Sitzenfrei Robert

Affiliations

Fraunhofer Austria Research GmbH KI4LIFE, Lakeside B13a, 9020 Klagenfurt am Wörthersee, Austria.

Unit of Environmental Engineering, Department of Infrastructure Engineering, University of Innsbruck, Technikerstraße 13, Innsbruck 6020, Austria.

Publication Information

Water Res. 2025 Jan 1;268(Pt B):122779. doi: 10.1016/j.watres.2024.122779. Epub 2024 Nov 9.

DOI: 10.1016/j.watres.2024.122779
PMID: 39546974
Abstract

The "EU Artificial Intelligence Act" sets a framework for the implementation of artificial intelligence (AI) in Europe. As a legal assessment reveals, AI applications in water supply systems are categorised as high-risk AI if a failure in the AI application results in a significant impact on physical infrastructure or supply reliability. The use case of water demand forecasts with AI for automatic tank operation, for example, is categorised as high-risk AI and must fulfil specific requirements regarding model transparency (traceability, explainability) and technical robustness (accuracy, reliability). To this end, six widely established machine learning models, including both transparent and opaque models, are applied to different datasets for daily water demand forecasting, and the requirements regarding model accuracy, transparency and technical robustness are systematically evaluated for this use case. Opaque models generally achieve higher prediction accuracy than transparent models due to their ability to capture the complex relationship between parameters such as weather data and water demand. However, this also makes them vulnerable to deviations and irregularities in weather forecasts and historical water demand. In contrast, transparent models rely mainly on historical water demand data for the utilised dataset and are less influenced by weather data, making them more robust against various data irregularities. In summary, both transparent and opaque models can fulfil the requirements regarding explainability but differ in their level of transparency and robustness to input errors. The choice of model also depends on the operator's preferences and the context of the application.
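The accuracy-versus-robustness trade-off described in the abstract can be illustrated with a minimal sketch. All data and model forms below are hypothetical stand-ins (synthetic demand with a weekday and temperature effect), not the paper's models or datasets: a transparent seasonal-naive forecaster ignores weather inputs entirely, while a stand-in for an opaque model exploits the weather-demand relationship, so it is more accurate on clean data but degrades when the weather forecast is biased.

```python
import random

random.seed(42)

# Synthetic daily demand: weekday effect + temperature effect + noise
# (hypothetical numbers chosen for illustration only).
DAYS = 210
temp = [20 + 8 * (d % 30) / 30 + random.gauss(0, 2) for d in range(DAYS)]
weekday = [1 if d % 7 < 5 else -1 for d in range(DAYS)]
demand = [100 + 15 * weekday[d] + 0.8 * temp[d] + random.gauss(0, 3)
          for d in range(DAYS)]

def transparent(d, t):
    """Transparent model: seasonal-naive, repeats the same weekday last week.
    Ignores the weather input entirely."""
    return demand[d - 7]

def opaque(d, t):
    """Stand-in for an opaque model that has learned the full
    weather-demand relationship, so it depends on the weather input."""
    return 100 + 15 * weekday[d] + 0.8 * t

def mae(model, temps):
    """Mean absolute forecast error over the evaluation window."""
    errs = [abs(model(d, temps[d]) - demand[d]) for d in range(7, DAYS)]
    return sum(errs) / len(errs)

# Robustness check: feed both models a systematically biased weather forecast.
corrupted = [t + 15 for t in temp]

print(f"transparent, clean weather:     {mae(transparent, temp):.2f}")
print(f"transparent, corrupted weather: {mae(transparent, corrupted):.2f}")
print(f"opaque, clean weather:          {mae(opaque, temp):.2f}")
print(f"opaque, corrupted weather:      {mae(opaque, corrupted):.2f}")
```

On clean inputs the opaque stand-in wins on accuracy; under the corrupted weather forecast its error inflates, while the transparent model's error is unchanged because it never reads the weather input. This mirrors the abstract's finding, at toy scale.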


Similar Articles

1
Population Preferences for Performance and Explainability of Artificial Intelligence in Health Care: Choice-Based Conjoint Survey.
J Med Internet Res. 2021 Dec 13;23(12):e26611. doi: 10.2196/26611.
2
Understanding machine learning predictions of wastewater treatment plant sludge with explainable artificial intelligence.
Water Environ Res. 2024 Oct;96(10):e11136. doi: 10.1002/wer.11136.
3
Explainable AI for Bioinformatics: Methods, Tools and Applications.
Brief Bioinform. 2023 Sep 20;24(5). doi: 10.1093/bib/bbad236.
4
Exploring the Applications of Explainability in Wearable Data Analytics: Systematic Literature Review.
J Med Internet Res. 2024 Dec 24;26:e53863. doi: 10.2196/53863.
5
Can surgeons trust AI? Perspectives on machine learning in surgery and the importance of eXplainable Artificial Intelligence (XAI).
Langenbecks Arch Surg. 2025 Jan 28;410(1):53. doi: 10.1007/s00423-025-03626-7.
6
Systematic literature review on the application of explainable artificial intelligence in palliative care studies.
Int J Med Inform. 2025 Aug;200:105914. doi: 10.1016/j.ijmedinf.2025.105914. Epub 2025 Apr 8.
7
Smart Vision Transparency: Efficient Ocular Disease Prediction Model Using Explainable Artificial Intelligence.
Sensors (Basel). 2024 Oct 14;24(20):6618. doi: 10.3390/s24206618.
8
DeepXplainer: An interpretable deep learning based approach for lung cancer detection using explainable artificial intelligence.
Comput Methods Programs Biomed. 2024 Jan;243:107879. doi: 10.1016/j.cmpb.2023.107879. Epub 2023 Oct 24.
9
Explainable artificial intelligence and machine learning: novel approaches to face infectious diseases challenges.
Ann Med. 2023;55(2):2286336. doi: 10.1080/07853890.2023.2286336. Epub 2023 Nov 27.