Cross-Modal Graph Contrastive Learning with Cellular Images

Cited by: 0
Authors
Zheng, Shuangjia [1 ]
Rao, Jiahua [2 ]
Zhang, Jixian [3 ]
Zhou, Lianyu [4 ]
Xie, Jiancong [2 ]
Cohen, Ethan [5 ]
Lu, Wei [3 ]
Li, Chengtao [3 ]
Yang, Yuedong [2 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Global Inst Future Technol, Shanghai 200240, Peoples R China
[2] Sun Yat Sen Univ, Sch Comp Sci & Engn, Guangzhou 510000, Peoples R China
[3] Galixir Technol, Shanghai 200100, Peoples R China
[4] Xiamen Univ, Sch Informat, Xiamen 361005, Peoples R China
[5] Ecole Normale Super, PSL Res Inst, IBENS, Paris, France
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
cellular image; cross-modal learning; drug discovery; graph neural networks; self-supervised learning;
DOI
10.1002/advs.202404845
Chinese Library Classification (CLC)
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
Constructing discriminative representations of molecules lies at the core of a number of domains such as drug discovery, chemistry, and medicine. State-of-the-art methods employ graph neural networks and self-supervised learning (SSL) to learn structural representations from unlabeled data, which can then be fine-tuned for downstream tasks. Albeit powerful, these methods are pre-trained solely on molecular structures and thus often struggle with tasks involving intricate biological processes. Here, it is proposed to assist molecular representation learning with high-content cell microscopy images of chemically perturbed cells at the phenotypic level. To enable cross-modal pre-training, a unified framework is constructed that aligns the two modalities through multiple types of contrastive loss functions, which proves effective on newly formulated tasks of mutually retrieving molecules and their corresponding images. More importantly, the model can infer functional molecules from cellular images generated by genetic perturbations. In parallel, the proposed model transfers non-trivially to molecular property prediction and shows marked improvement on clinical outcome prediction. These results suggest that such cross-modality learning can bridge molecules and phenotypes and play an important role in drug discovery. This study introduces a novel approach to enhance molecular representation learning by integrating high-content cell microscopy images at the phenotypic level. The proposed unified framework employs contrastive loss functions for cross-modal pre-training, enabling mutual retrieval of molecules and images. The model improves not only molecular property prediction but also clinical outcome prediction, highlighting the potential of cross-modality learning in bridging molecular structures to phenotypes for drug discovery.
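As a rough illustration of the cross-modal alignment objective described in the abstract (not the authors' exact formulation), the minimal Python sketch below implements a symmetric InfoNCE-style contrastive loss between molecule-graph embeddings and cellular-image embeddings of matched perturbations. The function name, temperature value, and the assumption that embeddings come from a GNN molecule encoder and an image encoder applied to matched molecule/image pairs are illustrative assumptions only.

    import torch
    import torch.nn.functional as F

    def cross_modal_infonce(mol_emb, img_emb, temperature=0.1):
        """Symmetric contrastive loss between molecule and image embeddings.

        mol_emb, img_emb: (batch, dim) tensors from the two encoders
        (e.g., a GNN over molecular graphs and a CNN over microscopy images),
        where row i of each tensor describes the same perturbation.
        """
        # L2-normalize so dot products are cosine similarities.
        mol_emb = F.normalize(mol_emb, dim=-1)
        img_emb = F.normalize(img_emb, dim=-1)

        # Pairwise similarity matrix; the diagonal holds matched pairs.
        logits = mol_emb @ img_emb.t() / temperature
        targets = torch.arange(logits.size(0), device=logits.device)

        # Contrast in both directions: molecule -> image and image -> molecule.
        loss_m2i = F.cross_entropy(logits, targets)
        loss_i2m = F.cross_entropy(logits.t(), targets)
        return 0.5 * (loss_m2i + loss_i2m)

In practice, such a loss would be minimized during pre-training over mini-batches of paired molecules and perturbed-cell images, after which the molecule encoder could be fine-tuned for downstream property or outcome prediction.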
Pages: 11
Related Papers
50 records in total
  • [1] Cross-modal Knowledge Graph Contrastive Learning for Machine Learning Method Recommendation
    Cao, Xianshuai
    Shi, Yuliang
    Wang, Jihu
    Yu, Han
    Wang, Xinjun
    Yan, Zhongmin
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 3694 - 3702
  • [2] Graph Information Interaction on Feature and Structure via Cross-modal Contrastive Learning
    Wen, Jinyong
    Wang, Yuhu
    Zhang, Chunxia
    Xiang, Shiming
    Pan, Chunhong
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 1068 - 1073
  • [3] Cross-Modal Contrastive Learning for Code Search
    Shi, Zejian
    Xiong, Yun
    Zhang, Xiaolong
    Zhang, Yao
    Li, Shanshan
    Zhu, Yangyong
    2022 IEEE INTERNATIONAL CONFERENCE ON SOFTWARE MAINTENANCE AND EVOLUTION (ICSME 2022), 2022, : 94 - 105
  • [4] Cross-modal Contrastive Learning for Speech Translation
    Ye, Rong
    Wang, Mingxuan
    Li, Lei
    NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES, 2022, : 5099 - 5113
  • [5] Cross-modal contrastive learning for multimodal sentiment recognition
    Yang, Shanliang
    Cui, Lichao
    Wang, Lei
    Wang, Tao
    APPLIED INTELLIGENCE, 2024, 54 (05) : 4260 - 4276
  • [6] TRAJCROSS: Trajecotry Cross-Modal Retrieval with Contrastive Learning
    Jing, Quanliang
    Yao, Di
    Gong, Chang
    Fan, Xinxin
    Wang, Baoli
    Tan, Haining
    Bi, Jingping
    2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2021, : 344 - 349
  • [7] Multimodal Graph Learning for Cross-Modal Retrieval
    Xie, Jingyou
    Zhao, Zishuo
    Lin, Zhenzhou
    Shen, Ying
    PROCEEDINGS OF THE 2023 SIAM INTERNATIONAL CONFERENCE ON DATA MINING, SDM, 2023, : 145 - 153
  • [8] Cross-modal Metric Learning with Graph Embedding
    Zhang, Youcai
    Gu, Xiaodong
    2018 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2018, : 758 - 764
  • [9] A Classification Method for the Cellular Images Based on Active Learning and Cross-Modal Transfer Learning
    Vununu, Caleb
    Lee, Suk-Hwan
    Kwon, Ki-Ryong
    SENSORS, 2021, 21 (04) : 1 - 24