Streamlit-based enhancing crop recommendation systems with advanced explainable artificial intelligence for smart farming

Cited: 13
Authors
Akkem, Yaganteeswarudu [1 ]
Biswas, Saroj Kumar [1 ]
Varanasi, Aruna [2 ]
Affiliations
[1] National Institute of Technology, Silchar
[2] Sreenidhi Institute of Science and Technology, Hyderabad
Funding
UK Research and Innovation;
Keywords
GDPR; LIME; Machine learning; SHAP; Smart farming; XAI;
DOI
10.1007/s00521-024-10208-z
Abstract
The main objective of this paper is to clarify the importance of explainability in the crop recommendation process and to provide insights into how Explainable Artificial Intelligence (XAI) can be successfully incorporated into existing models. The aim is to increase the clarity and transparency of AI-generated recommendations in smart agriculture, leading to a detailed analysis of the alignment between crop recommendation systems and XAI that informs decisions and supports sustainable knowledge and practices in modern agriculture. The paper reviews state-of-the-art XAI techniques such as Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), integrated gradients (IG), and layer-wise relevance propagation (LRP). It focuses on interpretable models and critical-feature analysis, and the XAI methods are discussed in terms of their applications, critical features, and definitions. The paper finds that XAI methods such as LIME and SHAP can make AI-driven crop recommendation systems more transparent and reliable. Graphical techniques such as dependence plots, summary plots, waterfall plots, and decision plots effectively analyze feature importance. The paper includes counterfactual explanations using DiCE, paired with advanced techniques combining IG and LRP to provide an in-depth narrative of model behavior. The novelty of this study lies in a detailed investigation of how XAI can be incorporated into crop recommendation systems to address the “black box” nature of AI models. It uses a unique combination of XAI techniques and models to make AI-driven recommendations more meaningful and practical for farmers. The proposed systems and techniques are designed to serve agriculture, addressing the specific needs of intelligent farming systems, making this research a significant contribution to agricultural AI. © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2024.
Pages: 20011–20025
Page count: 14