Deep Convolutional Neural Networks for Multi-Instance Multi-Task Learning

Cited by: 36
Authors
Zeng, Tao [1 ]
Ji, Shuiwang [1 ]
Affiliations
[1] Washington State Univ, Sch Elect Engn & Comp Sci, Pullman, WA 99164 USA
Keywords
Deep learning; multi-instance learning; multi-task learning; transfer learning; bioinformatics; ANNOTATION;
DOI
10.1109/ICDM.2015.92
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
Multi-instance learning studies problems in which labels are assigned to bags that contain multiple instances. In these settings, the relations between instances and labels are usually ambiguous. In contrast, multi-task learning focuses on the output space, in which an input sample is associated with multiple labels. In the real world, a sample may be associated with multiple labels derived from observing multiple aspects of the problem. Thus, many real-world applications are naturally formulated as multi-instance multi-task (MIMT) problems. A common approach to MIMT is to solve each task independently under the multi-instance learning framework. Meanwhile, convolutional neural networks (CNNs) have demonstrated promising performance on single-instance single-label image classification tasks. However, how a CNN can handle multi-instance multi-label tasks remains an open problem, mainly due to the complex many-to-many relations between the input and output spaces. In this work, we propose a deep learning model, known as multi-instance multi-task convolutional neural networks (MIMT-CNN), in which a number of images representing a multi-task problem are taken as the inputs. A shared sub-CNN is then connected to each input image to form instance representations. These sub-CNN outputs are subsequently aggregated as inputs to additional convolutional layers and fully connected layers to produce the final multi-label predictions. Through transfer learning from other domains, this CNN model enables the transfer of image-level prior knowledge learned from large single-label single-task data sets. The bag-level representations in this model are hierarchically abstracted from instance-level representations by multiple layers. Experimental results on mouse brain gene expression pattern annotation data show that the proposed MIMT-CNN model achieves superior performance.
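The abstract's pipeline (a shared sub-network encodes each instance image, the instance representations are aggregated into a bag-level representation, and one output layer scores all tasks jointly) can be sketched in plain NumPy. This is a minimal illustration, not the paper's model: all dimensions, weights, and the max-pooling aggregation step are hypothetical simplifications (the paper aggregates sub-CNN outputs through further convolutional and fully connected layers).

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    """Valid 2D convolution of a single-channel image x (H, W)
    with F filters w (F, kh, kw) -> feature maps (F, H-kh+1, W-kw+1)."""
    F, kh, kw = w.shape
    H, W = x.shape
    out = np.empty((F, H - kh + 1, W - kw + 1))
    for f in range(F):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[f, i, j] = np.sum(x[i:i + kh, j:j + kw] * w[f])
    return out

def sub_cnn(x, w_conv):
    """Shared instance encoder: conv -> ReLU -> global average pool."""
    h = np.maximum(conv2d(x, w_conv), 0.0)   # (F, H', W')
    return h.mean(axis=(1, 2))               # (F,) instance representation

def mimt_cnn(bag, w_conv, w_out, b_out):
    """Bag of instance images -> multi-label probabilities.
    The same w_conv is shared across all instances (multi-instance);
    the output layer scores all K tasks jointly (multi-task)."""
    reps = np.stack([sub_cnn(x, w_conv) for x in bag])  # (n_instances, F)
    bag_rep = reps.max(axis=0)                          # aggregate instances
    logits = bag_rep @ w_out + b_out                    # (K,) one score per task
    return 1.0 / (1.0 + np.exp(-logits))                # independent sigmoids

# Hypothetical toy dimensions: F filters, K labels/tasks.
F, K = 4, 6
w_conv = rng.standard_normal((F, 3, 3)) * 0.1
w_out = rng.standard_normal((F, K)) * 0.1
b_out = np.zeros(K)

bag = [rng.standard_normal((16, 16)) for _ in range(5)]  # 5 instance images
probs = mimt_cnn(bag, w_conv, w_out, b_out)
print(probs.shape)  # (6,)
```

Sharing `w_conv` across instances is what lets image-level knowledge transfer into the bag-level prediction, mirroring the transfer-learning motivation in the abstract.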
Pages: 579-588 (10 pages)
Related papers (50 entries)
  • [41] Multi-instance learning of graph neural networks for aqueous pKa prediction
    Xiong, Jiacheng
    Li, Zhaojun
    Wang, Guangchao
    Fu, Zunyun
    Zhong, Feisheng
    Xu, Tingyang
    Liu, Xiaomeng
    Huang, Ziming
    Liu, Xiaohong
    Chen, Kaixian
    Jiang, Hualiang
    Zheng, Mingyue
    BIOINFORMATICS, 2022, 38 (03) : 792 - 798
  • [42] Optical multi-task learning using multi-wavelength diffractive deep neural networks
    Duan, Zhengyang
    Chen, Hang
    Lin, Xing
    NANOPHOTONICS, 2023, 12 (05) : 893 - 903
  • [43] Heterogeneous Multi-task Learning for Human Pose Estimation with Deep Convolutional Neural Network
    Li, Sijin
    Liu, Zhi-Qiang
    Chan, Antoni B.
    2014 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), 2014, : 488 - +
  • [44] Heterogeneous Multi-task Learning for Human Pose Estimation with Deep Convolutional Neural Network
    Li, Sijin
    Liu, Zhi-Qiang
    Chan, Antoni B.
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2015, 113 (01) : 19 - 36
  • [46] MULTI-TASK DEEP NEURAL NETWORK FOR MULTI-LABEL LEARNING
    Huang, Yan
    Wang, Wei
    Wang, Liang
    Tan, Tieniu
    2013 20TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP 2013), 2013, : 2897 - 2900
  • [47] Adversarial Multi-task Learning of Deep Neural Networks for Robust Speech Recognition
    Shinohara, Yusuke
    17TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2016), VOLS 1-5: UNDERSTANDING SPEECH PROCESSING IN HUMANS AND MACHINES, 2016, : 2369 - 2372
  • [48] Multi-Task Federated Learning for Personalised Deep Neural Networks in Edge Computing
    Mills, Jed
    Hu, Jia
    Min, Geyong
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2022, 33 (03) : 630 - 641
  • [49] A multi-task learning approach based on convolutional neural networks for image aesthetic evaluation
    Soydaner, Derya
    Wagemans, Johan
    PERCEPTION, 2022, 51 : 168 - 168
  • [50] Explainable multi-instance and multi-task learning for COVID-19 diagnosis and lesion segmentation in CT images
    Li, Minglei
    Li, Xiang
    Jiang, Yuchen
    Zhang, Jiusi
    Luo, Hao
    Yin, Shen
    KNOWLEDGE-BASED SYSTEMS, 2022, 252