CCAN: Constraint Co-Attention Network for Instance Grasping

Cited by: 0
Authors
Cai, Junhao [1 ]
Tao, Xuefeng [1 ]
Cheng, Hui [1 ]
Zhang, Zhanpeng [2 ]
Affiliations
[1] Sun Yat-sen University, School of Data and Computer Science, Guangzhou, People's Republic of China
[2] SenseTime Group Ltd, Shenzhen, People's Republic of China
Keywords
DOI
10.1109/icra40945.2020.9197182
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline classification code
0812
Abstract
Instance grasping is a challenging robotic grasping task in which a robot must grasp a specified target object in a cluttered scene. In this paper, we propose a novel end-to-end instance grasping method that uses only monocular workspace and query images, where the workspace image contains several objects and the query image contains only the target object. To extract discriminative features effectively and to facilitate training, we propose a learning-based method, the Constraint Co-Attention Network (CCAN), which consists of a constraint co-attention module and a grasp affordance predictor. The co-attention module constructs features of the workspace image conditioned on features extracted from the query image. Soft constraints introduced into the co-attention module highlight the target object's features while suppressing those of other objects in the workspace image. Using the features produced by the co-attention module, a cascaded grasp affordance interpreter network predicts the grasp configuration only for the target object. CCAN is trained entirely through self-supervision in simulation. Extensive qualitative and quantitative experiments demonstrate the effectiveness of our method in both simulated and real-world environments, even on entirely unseen objects.
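The abstract describes the core mechanism: workspace features are re-weighted by their similarity to query (target) features, so target-like regions are emphasized before grasp prediction. Below is a minimal, hypothetical PyTorch sketch of such query-conditioned co-attention; the class name, projection layers, dimensions, and the max-over-query soft-constraint scoring are illustrative assumptions, not the paper's exact architecture, and the cascaded grasp affordance interpreter is not reproduced.

```python
# Illustrative sketch only: all names and the soft-constraint scoring
# below are assumptions, not CCAN's published architecture.
import torch
import torch.nn as nn

class CoAttention(nn.Module):
    """Re-weights workspace features by their similarity to query features,
    softly emphasizing regions that resemble the target object."""

    def __init__(self, channels: int):
        super().__init__()
        self.proj_w = nn.Conv2d(channels, channels, kernel_size=1)  # workspace projection
        self.proj_q = nn.Conv2d(channels, channels, kernel_size=1)  # query projection

    def forward(self, feat_workspace, feat_query):
        # feat_workspace: (B, C, Hw, Ww); feat_query: (B, C, Hq, Wq)
        B, C, Hw, Ww = feat_workspace.shape
        w = self.proj_w(feat_workspace).flatten(2)   # (B, C, Nw)
        q = self.proj_q(feat_query).flatten(2)       # (B, C, Nq)
        # Scaled affinity between every workspace and query location.
        affinity = torch.einsum("bcn,bcm->bnm", w, q) / C ** 0.5  # (B, Nw, Nq)
        # Soft constraint (assumed form): score each workspace location by
        # its best match in the query, then normalize to an attention map.
        score = affinity.max(dim=2).values            # (B, Nw)
        attn = torch.softmax(score, dim=1).view(B, 1, Hw, Ww)
        # Highlight target-like regions; de-emphasize the rest.
        return feat_workspace * attn

if __name__ == "__main__":
    coattn = CoAttention(channels=64)
    fw = torch.randn(2, 64, 28, 28)   # workspace feature map
    fq = torch.randn(2, 64, 14, 14)   # query (target object) feature map
    fused = coattn(fw, fq)
    print(fused.shape)                # torch.Size([2, 64, 28, 28])
```

In a full pipeline, `fused` would feed a grasp affordance head that maps the attended features to a grasp configuration for the target object.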
Pages: 8353-8359
Page count: 7