Explainable artificial intelligence and interpretable machine learning for agricultural data analysis

Cited by: 72
Author
Ryo, Masahiro [1 ,2 ]
Affiliations
[1] Leibniz Ctr Agr Landscape Res ZALF, Eberswalder Str 84, D-15374 Müncheberg, Germany
[2] Brandenburg Univ Technol Cottbus Senftenberg, Pl Deutsch Einheit 1, D-03046 Cottbus, Germany
Source
ARTIFICIAL INTELLIGENCE IN AGRICULTURE | 2022, Vol. 6
Keywords
Interpretable machine learning; Explainable artificial intelligence; Agriculture; Crop yield; No-tillage; XAI; NO-TILL; BLACK-BOX; MODELS; CROP
DOI
10.1016/j.aiia.2022.11.003
Chinese Library Classification (CLC)
S [Agricultural Sciences]
Discipline classification code
09
Abstract
Artificial intelligence and machine learning have been increasingly applied for prediction in agricultural science. However, many models are typically black boxes, meaning we cannot explain what the models learned from the data or the reasons behind their predictions. To address this issue, I introduce an emerging subdomain of artificial intelligence, explainable artificial intelligence (XAI), and its associated toolkit, interpretable machine learning. This study demonstrates the usefulness of several methods by applying them to an openly available dataset. The dataset includes the no-tillage effect on crop yield relative to conventional tillage, together with soil, climate, and management variables. The analysis found that no-tillage management can increase maize crop yield where yield under conventional tillage is <5000 kg/ha and the maximum temperature is higher than 32 °C. These methods are useful for answering (i) which variables are important for prediction in regression/classification, (ii) which variable interactions are important for prediction, (iii) how important variables and their interactions are associated with the response variable, (iv) what the reasons are underlying a predicted value for a certain instance, and (v) whether different machine learning algorithms offer the same answer to these questions. I argue that current practice overemphasizes goodness of model fit, as assessed by performance measures, while leaving these questions unanswered. XAI and interpretable machine learning can enhance trust in, and the explainability of, AI.

© 2022 The Author. Publishing services by Elsevier B.V. on behalf of KeAi Communications Co., Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
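To make questions (i) and (iii) from the abstract concrete, the following is a minimal sketch in Python using scikit-learn: permutation importance for variable importance and partial dependence for how a variable is associated with the response. It runs on synthetic stand-in data; the column names (conventional_yield_kg_ha, tmax_c, soil_clay_pct) are hypothetical illustrations, not the study's actual dataset or the author's original analysis code.

    # Sketch of two interpretable-ML questions from the abstract:
    # (i)   which variables matter  -> permutation importance
    # (iii) how they relate to the response -> partial dependence
    import matplotlib.pyplot as plt
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import PartialDependenceDisplay, permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Hypothetical stand-ins for the paper's soil/climate/management predictors.
    X = pd.DataFrame({
        "conventional_yield_kg_ha": rng.uniform(1000, 9000, 500),
        "tmax_c": rng.uniform(20, 40, 500),
        "soil_clay_pct": rng.uniform(5, 60, 500),
    })
    # Toy response mimicking the abstract's finding: the no-tillage yield gain
    # is largest when conventional yield is <5000 kg/ha and tmax exceeds 32 °C.
    y = ((X["conventional_yield_kg_ha"] < 5000) & (X["tmax_c"] > 32)).astype(float) \
        + rng.normal(0, 0.1, 500)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

    # (i) Permutation importance: performance drop when one variable is shuffled.
    imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
    for name, mean in zip(X.columns, imp.importances_mean):
        print(f"{name}: {mean:.3f}")

    # (iii) Partial dependence: average predicted response across a variable's range.
    PartialDependenceDisplay.from_estimator(
        model, X_te, ["conventional_yield_kg_ha", "tmax_c"])
    plt.show()

Question (iv), explaining the prediction for a single instance, is typically addressed with local attribution methods such as SHAP or LIME, which live in separate packages and follow the same fit-then-explain pattern shown here.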
Pages: 257-265
Page count: 9