Dual-View Learning Based on Images and Sequences for Molecular Property Prediction

Cited by: 1
Authors
Zhang, Xiang [1 ]
Xiang, Hongxin [1 ]
Yang, Xixi [1 ]
Dong, Jingxin [1 ]
Fu, Xiangzheng [1 ]
Zeng, Xiangxiang [1 ]
Chen, Haowen [1 ]
Li, Keqin [2 ]
Affiliations
[1] Hunan Univ, Coll Comp Sci & Elect Engn, Changsha 410082, Peoples R China
[2] SUNY New Paltz, Dept Comp Sci, New Paltz, NY 12561 USA
Keywords
Task analysis; Visualization; Feature extraction; Drugs; Head; Chemicals; Bioinformatics; Drug design and development; images and SMILES strings; predict molecular properties; deep learning toolbox; DRUG-LIKENESS; DISCOVERY;
DOI
10.1109/JBHI.2023.3347794
CLC Number
TP [automation technology, computer technology];
Subject Classification Code
0812;
Abstract
The prediction of molecular properties remains a challenging task in drug design and development. Recently, there has been growing interest in the analysis of biological images. Molecular images, as a novel representation, have proven competitive, yet they lack explicit semantic information and rich detail. Conversely, the semantic information in SMILES sequences is explicit but lacks spatial structural detail. Therefore, in this study, we explore the relationship between these two types of representations and propose a novel multimodal architecture named ISMol. ISMol relies on a cross-attention mechanism to extract informative molecular representations from both images and SMILES strings and uses them to predict molecular properties. Evaluation results on 14 small-molecule ADMET datasets indicate that ISMol outperforms machine learning (ML) and deep learning (DL) models based on single-modal representations. In addition, extensive experiments analyze the superiority, interpretability, and generalizability of the method. In summary, ISMol offers a powerful deep learning toolbox for predicting a variety of molecular properties in drug discovery.
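The abstract describes ISMol as fusing a molecular-image view and a SMILES view through cross-attention before predicting properties. The snippet below is a minimal sketch of what such dual-view cross-attention fusion could look like in PyTorch; it is an illustration under stated assumptions, not the authors' implementation, and the module name, dimensions, pooling, and prediction head (DualViewCrossAttention, dim, num_tasks) are hypothetical choices.

```python
# Minimal sketch (not the authors' code) of cross-attention fusion between
# an image view and a SMILES view, as described in the abstract.
# Encoder choices, dimensions, and the pooling/readout are assumptions.
import torch
import torch.nn as nn


class DualViewCrossAttention(nn.Module):
    """Fuses patch embeddings of a molecular image with token embeddings of
    its SMILES string via bidirectional cross-attention, then predicts a
    molecular property from the pooled joint representation."""

    def __init__(self, dim: int = 256, num_heads: int = 8, num_tasks: int = 1):
        super().__init__()
        # SMILES tokens attend to image patches, and vice versa.
        self.smiles_to_image = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.image_to_smiles = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.head = nn.Sequential(
            nn.LayerNorm(2 * dim),
            nn.Linear(2 * dim, dim),
            nn.ReLU(),
            nn.Linear(dim, num_tasks),  # e.g., one output per ADMET endpoint
        )

    def forward(self, image_patches: torch.Tensor, smiles_tokens: torch.Tensor) -> torch.Tensor:
        # image_patches: (B, P, dim) from an image encoder (assumed, e.g., ViT/CNN)
        # smiles_tokens: (B, T, dim) from a SMILES encoder (assumed, e.g., Transformer)
        s_attn, _ = self.smiles_to_image(smiles_tokens, image_patches, image_patches)
        i_attn, _ = self.image_to_smiles(image_patches, smiles_tokens, smiles_tokens)
        # Mean-pool each attended view and concatenate into a joint representation.
        joint = torch.cat([s_attn.mean(dim=1), i_attn.mean(dim=1)], dim=-1)
        return self.head(joint)


if __name__ == "__main__":
    model = DualViewCrossAttention()
    img = torch.randn(4, 49, 256)   # 4 molecules, 49 image patches
    smi = torch.randn(4, 64, 256)   # 4 molecules, 64 SMILES tokens
    print(model(img, smi).shape)    # torch.Size([4, 1])
```

The random tensors in the usage example merely stand in for the patch and token embeddings that trained image and SMILES encoders would produce.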
Pages: 1564-1574
Number of pages: 11
Related Papers
50 records in total
  • [41] Dual-view co-contrastive learning for multi-behavior recommendation
    Li, Qingfeng
    Ma, Huifang
    Zhang, Ruoyi
    Jin, Wangyu
    Li, Zhixin
    APPLIED INTELLIGENCE, 2023, 53 (17) : 20134 - 20151
  • [42] Dual-View Deep Learning Model for Accurate Breast Cancer Detection in Mammograms
    Shah, Dilawar
    Khan, Mohammad Asmat Ullah
    Abrar, Mohammad
    Tahir, Muhammad
    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2025, 2025 (01)
  • [43] DAN: Dual-View Representation Learning for Adapting Stance Classifiers to New Domains
    Xu, Chang
    Paris, Cecile
    Nepal, Surya
    Sparks, Ross
    Long, Chong
    Wang, Yafang
    ECAI 2020: 24TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, 325 : 2260 - 2267
  • [44] Dual-view co-contrastive learning for multi-behavior recommendation
    Li, Qingfeng
    Ma, Huifang
    Zhang, Ruoyi
    Jin, Wangyu
    Li, Zhixin
    APPLIED INTELLIGENCE, 2023, 53 (17) : 20134 - 20151
  • [45] A coastal obstacle detection framework of dual USVs based on dual-view color fusion
    He, Zehao
    Dai, Yongshou
    Li, Ligang
    Xu, Hongbin
    Jin, Jiucai
    Liu, Deqing
    SIGNAL IMAGE AND VIDEO PROCESSING, 2023, 17 (07) : 3883 - 3892
  • [46] A coastal obstacle detection framework of dual USVs based on dual-view color fusion
    He, Zehao
    Dai, Yongshou
    Li, Ligang
    Xu, Hongbin
    Jin, Jiucai
    Liu, Deqing
    SIGNAL IMAGE AND VIDEO PROCESSING, 2023, 17 (07) : 3883 - 3892
  • [47] Laser Beam Pointing Control Based on Differential Dual-View Imaging
    Yang Hongwei
    Du Yimian
    Lu Lidong
    Tao Wei
    Lu Junguo
    Zhao Hui
    CHINESE JOURNAL OF LASERS-ZHONGGUO JIGUANG, 2021, 48 (17):
  • [48] Dual-view catadioptric panoramic system based on even aspheric elements
    Amani, Alireza
    Bai, Jian
    Huang, Xiao
    APPLIED OPTICS, 2020, 59 (25) : 7630 - 7637
  • [49] Dual-view integral imaging display based on point light sources
    Wu, Fei
    Liu, Ze-Sheng
    Yu, Jun-Sheng
    JOURNAL OF THE SOCIETY FOR INFORMATION DISPLAY, 2021, 29 (02) : 115 - 118
  • [50] Dual-View 3D Displays Based on Integral Imaging
    Wang, Qiong-Hua
    Deng, Huan
    Wu, Fei
    ADVANCES IN DISPLAY TECHNOLOGIES VI, 2016, 9770