Explainable artificial intelligence for reliable water demand forecasting to increase trust in predictions

Cited by: 1
Authors
Maussner, Claudia [1 ]
Oberascher, Martin [2 ]
Autengruber, Arnold [3 ]
Kahl, Arno [3 ]
Sitzenfrei, Robert [2 ]
Affiliations
[1] Fraunhofer Austria Res GmbH KI4LIFE, Lakeside B13a, A-9020 Klagenfurt Am Worthersee, Austria
[2] Univ Innsbruck, Dept Infrastruct Engn, Unit Environm Engn, Tech Str 13, A-6020 Innsbruck, Austria
[3] Univ Innsbruck, Dept Publ Law Constitut & Adm Theory, Innrain 52d, A-6020 Innsbruck, Austria
Keywords
Battle of water demand forecasting; EU AI act; Machine learning; Opaque; Transparent; Water supply system; XAI;
DOI
10.1016/j.watres.2024.122779
CLC classification
X [Environmental Science, Safety Science];
Subject classification codes
08 ; 0830 ;
Abstract
The "EU Artificial Intelligence Act" sets a framework for the implementation of artificial intelligence (AI) in Europe. As a legal assessment reveals, AI applications in water supply systems are categorised as high-risk AI if a failure in the AI application results in a significant impact on physical infrastructure or supply reliability. For example, the use case of AI-based water demand forecasting for automatic tank operation is categorised as high-risk AI and must fulfil specific requirements regarding model transparency (traceability, explainability) and technical robustness (accuracy, reliability). To this end, six widely established machine learning models, including both transparent and opaque models, are applied to different datasets for daily water demand forecasting, and the requirements regarding model accuracy, transparency and technical robustness are systematically evaluated for this use case. Opaque models generally achieve higher prediction accuracy than transparent models due to their ability to capture complex relationships between parameters such as weather data and water demand. However, this also makes them vulnerable to deviations and irregularities in weather forecasts and historical water demand. In contrast, transparent models rely mainly on historical water demand data for the utilised dataset and are less influenced by weather data, making them more robust against various data irregularities. In summary, both transparent and opaque models can fulfil the requirements regarding explainability but differ in their level of transparency and robustness to input errors. The choice of model also depends on the operator's preferences and the context of the application.
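The contrast the abstract draws between transparent and opaque models can be illustrated with a minimal, self-contained numpy sketch. This is not taken from the paper: the synthetic daily-demand data, the fitted coefficients, and the +5 °C forecast bias are assumptions chosen purely for illustration of why a transparent model's sensitivity to a weather input error is directly traceable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily data (illustrative only): demand driven by a weekday
# pattern plus temperature, mimicking the weather dependence discussed above.
n = 365
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(n) / n) + rng.normal(0, 2, n)
weekday = (np.arange(n) % 7 < 5).astype(float)
demand = 100 + 5 * weekday + 1.5 * temp + rng.normal(0, 3, n)

# Transparent model: ordinary least squares. Its coefficients can be read
# directly, so the weather sensitivity needs no post-hoc explanation.
X = np.column_stack([np.ones(n), weekday, temp])
coef, *_ = np.linalg.lstsq(X, demand, rcond=None)

# Robustness probe: apply a systematic +5 degC bias to the temperature
# input (a faulty weather forecast) and measure the prediction shift.
X_bad = X.copy()
X_bad[:, 2] += 5.0
shift = float(np.mean(X_bad @ coef - X @ coef))
# For a linear model the shift equals 5 * (temperature coefficient), so
# the impact of the input error is fully traceable from the model itself.
```

For an opaque model (e.g. a gradient-boosted ensemble), the same probe would require empirical re-evaluation and post-hoc tools such as permutation importance, which is the kind of transparency gap the study evaluates.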
Pages: 12