Dual Convolutional LSTM Network for Referring Image Segmentation

Cited by: 35
Authors
Ye, Linwei [1 ]
Liu, Zhi [2 ,3 ]
Wang, Yang [1 ]
Affiliations
[1] Univ Manitoba, Dept Comp Sci, Winnipeg, MB R3T 2N2, Canada
[2] Shanghai Univ, Shanghai Inst Adv Commun & Data Sci, Shanghai 200444, Peoples R China
[3] Shanghai Univ, Sch Commun & Informat Engn, Shanghai 200444, Peoples R China
Funding
National Natural Science Foundation of China; Natural Sciences and Engineering Research Council of Canada
Keywords
Image segmentation; Visualization; Decoding; Linguistics; Task analysis; Logic gates; Computer vision; Referring image segmentation; encoder-decoder; vision and language; deep learning
DOI
10.1109/TMM.2020.2971171
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
We consider referring image segmentation, a problem at the intersection of computer vision and natural language understanding. Given an input image and a referring expression in the form of a natural language sentence, the goal is to segment the object of interest in the image referred to by the linguistic query. To this end, we propose a dual convolutional LSTM (ConvLSTM) network. Our model consists of an encoder network and a decoder network, with ConvLSTM used in both to capture spatial and sequential information. The encoder network extracts visual and linguistic features for each word in the expression sentence and adopts an attention mechanism to focus on words that are more informative in the multimodal interaction. The decoder network takes the features generated by the encoder network at multiple levels as its input and produces the final precise segmentation mask. Experimental results on four challenging datasets demonstrate that the proposed network achieves superior segmentation performance compared with other state-of-the-art methods.
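The sketch below illustrates the kind of architecture the abstract describes: a ConvLSTM that scans the referring expression word by word over the visual feature map with per-word attention, and a second ConvLSTM that refines the fused feature into a segmentation mask. It is not the authors' released code; it assumes a PyTorch setting, and the class names (ConvLSTMCell, DualConvLSTMSegmenter), channel sizes, and the single-level decoder sweep (standing in for the paper's multi-level feature integration) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvLSTMCell(nn.Module):
    """LSTM cell whose gates are computed with 2-D convolutions, so the
    hidden state keeps its spatial layout (B, hidden, H, W)."""

    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        self.hidden_channels = hidden_channels
        self.gates = nn.Conv2d(in_channels + hidden_channels,
                               4 * hidden_channels,
                               kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


class DualConvLSTMSegmenter(nn.Module):
    """Hypothetical dual-ConvLSTM model: the encoder ConvLSTM fuses visual
    features with one word embedding per step under word attention; the
    decoder ConvLSTM refines the fused feature into mask logits."""

    def __init__(self, vis_channels=512, word_dim=300, hidden=256, dec_steps=3):
        super().__init__()
        self.encoder = ConvLSTMCell(vis_channels + word_dim, hidden)
        self.decoder = ConvLSTMCell(hidden, hidden)
        self.word_score = nn.Linear(word_dim, 1)   # scalar attention score per word
        self.mask_head = nn.Conv2d(hidden, 1, 1)   # per-pixel mask logits
        self.dec_steps = dec_steps

    def forward(self, vis_feat, word_feats):
        # vis_feat: (B, C, H, W) backbone features; word_feats: (B, T, D) embeddings.
        B, _, H, W = vis_feat.shape
        attn = torch.softmax(self.word_score(word_feats).squeeze(-1), dim=1)  # (B, T)

        h = vis_feat.new_zeros(B, self.encoder.hidden_channels, H, W)
        c = torch.zeros_like(h)
        fused = torch.zeros_like(h)
        for t in range(word_feats.size(1)):
            # Tile the t-th word embedding over the spatial grid and fuse it
            # with the visual features through the encoder ConvLSTM.
            w = word_feats[:, t, :, None, None].expand(-1, -1, H, W)
            h, c = self.encoder(torch.cat([vis_feat, w], dim=1), (h, c))
            fused = fused + attn[:, t].view(B, 1, 1, 1) * h  # attention-weighted sum

        # Decoder ConvLSTM: a few refinement sweeps over the fused feature
        # stand in for the paper's multi-level feature integration.
        dh, dc = torch.zeros_like(fused), torch.zeros_like(fused)
        for _ in range(self.dec_steps):
            dh, dc = self.decoder(fused, (dh, dc))

        logits = self.mask_head(dh)                                   # (B, 1, H, W)
        return F.interpolate(logits, scale_factor=8, mode="bilinear",
                             align_corners=False)                     # full resolution


if __name__ == "__main__":
    model = DualConvLSTMSegmenter()
    vis = torch.randn(2, 512, 40, 40)    # e.g. conv features of 320x320 images
    words = torch.randn(2, 7, 300)       # a 7-word referring expression per image
    print(model(vis, words).shape)       # torch.Size([2, 1, 320, 320])
```

Keeping the hidden state convolutional (rather than flattening it, as a standard LSTM would) is what lets both the language scan and the mask refinement stay spatially aligned with the image.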
Pages: 3224-3235
Number of pages: 12
Related Papers
50 records in total
  • [21] Lei, Sen; Xiao, Xinyu; Zhang, Tianlin; Li, Heng-Chao; Shi, Zhenwei; Zhu, Qing. Exploring Fine-Grained Image-Text Alignment for Referring Remote Sensing Image Segmentation. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2025, 63.
  • [22] Akilan, Thangarajah; Wu, Qingming Jonathan; Safaei, Amin; Huo, Jie; Yang, Yimin. A 3D CNN-LSTM-Based Image-to-Image Foreground Segmentation. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2020, 21 (03): 959-971.
  • [23] Yuan, Zhenghang; Mou, Lichao; Hua, Yuansheng; Zhu, Xiao Xiang. RRSIS: Referring Remote Sensing Image Segmentation. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62: 1-12.
  • [24] Qiu, Shuang; Zhao, Yao; Jiao, Jianbo; Wei, Yunchao; Wei, Shikui. Referring Image Segmentation by Generative Adversarial Learning. IEEE TRANSACTIONS ON MULTIMEDIA, 2020, 22 (05): 1333-1344.
  • [25] Hansen, Kasper Foss; Yao, Linghong; Ren, Kang; Wang, Sen; Liu, Wenwen; Liu, Yuanchang. Image segmentation in marine environments using convolutional LSTM for temporal context. APPLIED OCEAN RESEARCH, 2023, 139.
  • [26] Liu, Min; Wu, Shuhan; Chen, Runze; Lin, Zhuangdian; Wang, Yaonan; Meijering, Erik. Brain Image Segmentation for Ultrascale Neuron Reconstruction via an Adaptive Dual-Task Learning Network. IEEE TRANSACTIONS ON MEDICAL IMAGING, 2024, 43 (07): 2574-2586.
  • [27] Wu, Chenyue; Yi, Benshun; Zhang, Yungang; Huang, Song; Feng, Yu. Retinal Vessel Image Segmentation Based on Improved Convolutional Neural Network. ACTA OPTICA SINICA, 2018, 38 (11).
  • [28] Hua, Guoguang; Liao, Muxin; Tian, Shishun; Zhang, Yuhang; Zou, Wenbin. Multiple Relational Learning Network for Joint Referring Expression Comprehension and Segmentation. IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25: 8805-8816.
  • [29] Ji, Lixia; Du, Yunlong; Dang, Yiping; Gao, Wenzhao; Zhang, Han. A survey of methods for addressing the challenges of referring image segmentation. NEUROCOMPUTING, 2024, 583.
  • [30] Feng, Guang; Zhang, Lihe; Hu, Zhiwei; Lu, Huchuan. Learning From Box Annotations for Referring Image Segmentation. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (03): 3927-3937.