Explaining Black Box Drug Target Prediction Through Model Agnostic Counterfactual Samples

Cited by: 2
Authors
Nguyen, Tri Minh [1 ]
Quinn, Thomas P. [1 ]
Nguyen, Thin [1 ]
Tran, Truyen [1 ]
Affiliations
[1] Deakin Univ, Appl Artificial Intelligence Inst, Burwood, Vic 3217, Australia
Keywords
Drugs; Proteins; Predictive models; Biological system modeling; Reinforcement learning; Deep learning; Computational modeling; Black box deep learning; counterfactual explanation; drug-target affinity; substructure interaction; PDBBIND DATABASE;
DOI
10.1109/TCBB.2022.3190266
Chinese Library Classification
Q5 [Biochemistry]
Subject Classification Codes
071010; 081704
Abstract
Many high-performance drug-target affinity (DTA) deep learning models have been proposed, but most are black boxes and thus lack human interpretability. Explainable AI (XAI) can make DTA models more trustworthy and allows biological knowledge to be distilled from them. Counterfactual explanation is a popular approach to explaining the behaviour of a deep neural network, which works by systematically answering the question "How would the model output change if the inputs were changed in this way?". We propose a multi-agent reinforcement learning framework, Multi-Agent Counterfactual Drug-target binding Affinity (MACDA), to generate counterfactual explanations for the drug-protein complex. Our framework provides human-interpretable counterfactual instances while optimizing both the input drug and the target simultaneously during counterfactual generation. We benchmark MACDA on the Davis and PDBBind datasets and find that it produces more parsimonious explanations with no loss in explanation validity, as measured by encoding similarity. We then present a case study involving ABL1 and Nilotinib to demonstrate how MACDA can explain the behaviour of a DTA model through the underlying substructure interactions between its inputs, revealing mechanisms that align with prior domain knowledge.
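The counterfactual question in the abstract ("How would the model output change if the inputs were changed in this way?") can be illustrated with a minimal, model-agnostic sketch. This is not the MACDA framework: the `predict` function below is a hypothetical stand-in for a black-box DTA model over binary substructure-presence features, and the brute-force search simply finds the smallest set of feature flips that meaningfully changes the output — the parsimony criterion the abstract refers to.

```python
import itertools

def predict(features):
    # Toy stand-in for a black-box DTA model: scores a binary
    # feature vector (e.g., presence/absence of drug substructures).
    weights = [0.9, 0.1, 0.8, 0.05]
    return sum(w * f for w, f in zip(weights, features))

def counterfactual(x, model, delta=0.5):
    """Return a minimal set of feature flips that shifts the model
    output by at least `delta` (a parsimonious counterfactual)."""
    base = model(x)
    n = len(x)
    for size in range(1, n + 1):            # try the smallest edits first
        for idx in itertools.combinations(range(n), size):
            cand = list(x)
            for i in idx:
                cand[i] = 1 - cand[i]       # flip one binary feature
            if abs(model(cand) - base) >= delta:
                return idx, cand
    return None, list(x)                    # no counterfactual found

flipped, cf = counterfactual([1, 1, 1, 0], predict)
print(flipped, cf)  # -> (0,) [0, 1, 1, 0]
```

Because the search enumerates edit sets in order of increasing size, the first counterfactual it returns is guaranteed to be minimal; MACDA instead learns this search with multi-agent reinforcement learning and edits drug and protein inputs jointly.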
Pages: 1020-1029
Number of pages: 10