Decision Basis and Reliability Analysis of Object Detection Model

Cited: 0
Authors
Ping Y.-K. [1 ]
Huang H.-Y. [2 ]
Jiang H. [3 ]
Ding Z.-H. [1 ]
Affiliations
[1] School of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou
[2] Library, Zhejiang Sci-Tech University, Hangzhou
[3] School of Software Technology, Dalian University of Technology, Dalian
Source
Ruan Jian Xue Bao/Journal of Software | 2022 / Vol. 33 / No. 09
Keywords
deep learning; interpretability; machine learning; object detection; reliability analysis;
DOI
10.13328/j.cnki.jos.006640
Abstract
Object detection models are widely applied in many fields, yet, like most machine learning models, they remain black boxes to humans. Interpreting such a model leads to a better understanding of its behavior and helps judge whether it is reliable. To address the interpretability problem of object detection models, this study proposes recasting the model's output as a regression problem over the existence possibility of objects of each class. On this basis, methods for analyzing the decision basis and reliability of the object detection model are put forward. Because its default image segmentation method generalizes poorly, LIME produces unfaithful and ineffective interpretations when applied to object detection models. The study therefore replaces LIME's image segmentation with DeepLab and further improves the method so that it can interpret object detection models. Experimental results demonstrate the superiority of the improved method in interpreting object detection models. © 2022 Chinese Academy of Sciences. All rights reserved.
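The abstract gives no implementation details, but the idea can be sketched as follows: score each class by the detector's highest confidence for that class (the "existence possibility"), and hand that scoring function to LIME together with a DeepLab segmentation in place of LIME's default superpixels. This is a minimal sketch under stated assumptions: the Faster R-CNN detector, DeepLabV3 backbone, class count, and helper names below are illustrative choices, not the authors' code.

```python
# Sketch only: assumes torchvision's Faster R-CNN as the detector and DeepLabV3
# as the segmenter; the paper's own models and training details are not given here.
import numpy as np
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.segmentation import deeplabv3_resnet50
from lime import lime_image

NUM_CLASSES = 91  # COCO label space used by the torchvision detector (assumption)
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval().to(DEVICE)
segmenter = deeplabv3_resnet50(weights="DEFAULT").eval().to(DEVICE)

# ImageNet statistics expected by the torchvision segmentation models.
_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
_STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)


def existence_scores(images: np.ndarray) -> np.ndarray:
    """Per-class 'existence possibility' for a batch of H x W x 3 uint8 images:
    the highest confidence the detector assigns to each class in each image."""
    scores = np.zeros((len(images), NUM_CLASSES), dtype=np.float32)
    with torch.no_grad():
        for i, img in enumerate(images):
            tensor = torch.from_numpy(img / 255.0).permute(2, 0, 1).float().to(DEVICE)
            pred = detector([tensor])[0]  # dict with 'boxes', 'labels', 'scores'
            for label, score in zip(pred["labels"].tolist(), pred["scores"].tolist()):
                scores[i, label] = max(scores[i, label], score)
    return scores


def deeplab_segments(image: np.ndarray) -> np.ndarray:
    """Replace LIME's default superpixel segmentation: each semantic region
    predicted by DeepLab becomes one interpretable segment."""
    tensor = torch.from_numpy(image / 255.0).permute(2, 0, 1).float()
    tensor = ((tensor - _MEAN) / _STD).unsqueeze(0).to(DEVICE)
    with torch.no_grad():
        logits = segmenter(tensor)["out"][0]       # (num_seg_classes, H, W)
    return logits.argmax(dim=0).cpu().numpy()      # (H, W) integer segment map


# `image` is the H x W x 3 uint8 picture to be explained.
# explainer = lime_image.LimeImageExplainer()
# explanation = explainer.explain_instance(
#     image, existence_scores,
#     top_labels=3, num_samples=500,
#     segmentation_fn=deeplab_segments,
# )
```

Swapping superpixels for semantic regions means each interpretable feature corresponds to a whole object or background region rather than an arbitrary patch, which is the intuition behind using DeepLab segmentation when explaining a detector.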