The Interpretability of Rule-based Modeling Approach and Its Development

Cited by: 0
Authors
Zhou Z.-J. [1]
Cao Y. [1]
Hu C.-H. [1]
Tang S.-W. [1]
Zhang C.-C. [1]
Wang J. [1]
Affiliation
[1] Missile Engineering College, Rocket Force University of Engineering, Xi'an
Source
Zidonghua Xuebao/Acta Automatica Sinica | 2021, Vol. 47, No. 6
Funding
National Natural Science Foundation of China
Keywords
Interpretability; Rule-based modeling approach; System modeling; Uncertainty;
DOI
10.16383/j.aas.c200402
Abstract
Model interpretability refers to the ability to express the behavior of a real system in an understandable way. As reliability requirements in engineering practice grow, building reliable and interpretable models that enhance human understanding of real systems has become a primary objective. Rule-based modeling approaches describe system mechanisms more intuitively: they not only integrate quantitative information and qualitative knowledge effectively, but also handle uncertain information flexibly. This paper surveys research on the interpretability of rule-based modeling approaches, organized around the knowledge base, the inference engine, and model optimization, and concludes with a brief review and outlook. Copyright © 2021 Acta Automatica Sinica. All rights reserved.
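To make the knowledge base / inference engine split mentioned in the abstract concrete, the sketch below implements a toy rule-based model in Python: a small knowledge base of weighted IF-THEN rules with fuzzy antecedents, and a simple inference engine that combines rule consequents according to their activation weights. The membership functions, rule weights, and combination scheme here are illustrative assumptions for this sketch only, not the specific methods surveyed in the paper.

```python
# Minimal sketch of a rule-based model: a small knowledge base of IF-THEN
# rules plus an inference engine that weights each rule by how well the
# numeric input matches its antecedent. All names, membership functions,
# and the weighting scheme are illustrative assumptions.

def triangular(x, left, peak, right):
    """Triangular membership degree of x in a fuzzy set, in [0, 1]."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

# Knowledge base: each rule maps a linguistic antecedent (a fuzzy set over
# the input) to a numeric consequent, with an expert-given rule weight.
RULES = [
    {"label": "low",    "set": (0.0, 0.0, 0.5), "consequent": 10.0, "weight": 1.0},
    {"label": "medium", "set": (0.0, 0.5, 1.0), "consequent": 50.0, "weight": 0.8},
    {"label": "high",   "set": (0.5, 1.0, 1.0), "consequent": 90.0, "weight": 1.0},
]

def infer(x):
    """Inference engine: activate each rule, then combine consequents by the
    normalized activation weights (a simple weighted average)."""
    activations = [r["weight"] * triangular(x, *r["set"]) for r in RULES]
    total = sum(activations)
    if total == 0.0:
        return None  # no rule fires for this input
    return sum(a * r["consequent"] for a, r in zip(activations, RULES)) / total

if __name__ == "__main__":
    for x in (0.1, 0.5, 0.8):
        print(f"input={x:.1f} -> output={infer(x):.1f}")
```

Because both the rule base (qualitative knowledge) and the activation weights (quantitative evidence) are explicit, every output can be traced back to the rules that fired, which is the sense in which such models are considered interpretable.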
Citation
Pages: 1201-1216
Page count: 15