LEARNING HIERARCHICAL SELF-ATTENTION FOR VIDEO SUMMARIZATION

Cited by: 0
Authors
Liu, Yen-Ting [1]
Li, Yu-Jhe [1]
Yang, Fu-En [1]
Chen, Shang-Fu [1]
Wang, Yu-Chiang Frank [1]
Affiliations
[1] National Taiwan University, Graduate Institute of Communication Engineering, Taipei, Taiwan
Source
2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP) | 2019
Keywords
Video Summarization; Hierarchical Structure; Attention Model; Deep Learning; Computer Vision
DOI
10.1109/icip.2019.8803639
Chinese Library Classification
TB8 [Photographic Technology]
Subject Classification Code
0804
Abstract
Video summarization remains a challenging task. Owing to the abundance of video data on the Internet, the task has drawn significant attention in the vision community and benefits a wide range of applications, e.g., video retrieval and search. To perform video summarization effectively by deriving the key frames that represent a given input video, we propose a novel framework named Hierarchical Multi-Attention Network (H-MAN), which comprises a shot-level reconstruction model and a multi-head attention model. Our attention model adopts a two-stage hierarchical structure to produce diverse attention maps, and we are among the first to exploit a multi-attention mechanism for video summarization, which brings improved performance. Quantitative and qualitative results demonstrate the effectiveness of our model, which performs favorably against state-of-the-art approaches.
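
The abstract describes a two-stage hierarchical attention design: multi-head self-attention among frames within each shot, followed by self-attention across shot-level representations. As a rough illustration only, the following minimal PyTorch sketch shows that general idea; the module names, feature dimension, mean pooling, and scoring head are all assumptions made for illustration, not the paper's actual H-MAN implementation (which also includes a shot-level reconstruction model not sketched here).

import torch
import torch.nn as nn

class HierarchicalAttentionSummarizer(nn.Module):
    """Two-stage hierarchical multi-head self-attention (illustrative sketch)."""
    def __init__(self, feat_dim=1024, num_heads=8):
        super().__init__()
        # Stage 1: self-attention among frames within each shot.
        self.frame_attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        # Stage 2: self-attention among shot-level representations.
        self.shot_attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        # Per-frame importance score in [0, 1] for key-frame selection.
        self.scorer = nn.Sequential(nn.Linear(feat_dim, 1), nn.Sigmoid())

    def forward(self, frames):
        # frames: (num_shots, frames_per_shot, feat_dim); fixed-length shots
        # for simplicity -- variable-length shots would need padding masks.
        intra, _ = self.frame_attn(frames, frames, frames)       # within-shot context
        shots = intra.mean(dim=1, keepdim=True).transpose(0, 1)  # (1, num_shots, D)
        inter, _ = self.shot_attn(shots, shots, shots)           # across-shot context
        fused = intra + inter.transpose(0, 1)                    # broadcast shot context to frames
        return self.scorer(fused).squeeze(-1)                    # (num_shots, frames_per_shot)

# Usage: score a video split into 6 shots of 16 frames with 1024-d features.
scores = HierarchicalAttentionSummarizer()(torch.randn(6, 16, 1024))
print(scores.shape)  # torch.Size([6, 16])

Mean pooling is used here only as a simple stand-in for building the shot-level representation; any learned aggregation (including the paper's reconstruction-based shot model) could take its place.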
Pages: 3377-3381
Number of pages: 5
Related Papers
50 items in total
  • [21] Sun, Yuwei; Ochiai, Hideya. Homogeneous Learning: Self-Attention Decentralized Deep Learning. IEEE ACCESS, 2022, 10: 7695-7703.
  • [22] Xu, Da; Ruan, Chuanwei; Kumar, Sushant; Korpeoglu, Evren; Achan, Kannan. Self-attention with Functional Time Representation Learning. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32.
  • [23] Chen, Ziye; Gong, Mingming; Xu, Yanwu; Wang, Chaohui; Zhang, Kun; Du, Bo. Compressed Self-Attention for Deep Metric Learning. THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE (AAAI 2020), 2020, 34: 3561-3568.
  • [24] Khanal, Subina; Thar, Kyi; Huh, Eui-Nam. Route-Based Proactive Content Caching Using Self-Attention in Hierarchical Federated Learning. IEEE ACCESS, 2022, 10: 29514-29527.
  • [25] Kumar, Sandeep; Solanki, Arun. An abstractive text summarization technique using transformer model with self-attention mechanism. NEURAL COMPUTING AND APPLICATIONS, 2023, 35(25): 18603-18622.
  • [28] Yao, Shunyu; Peng, Binghui; Papadimitriou, Christos; Narasimhan, Karthik. Self-Attention Networks Can Process Bounded Hierarchical Languages. 59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (ACL-IJCNLP 2021), VOL 1, 2021: 3770-3785.
  • [29] Huang, Weichun; Tao, Ziqiang; Huang, Xiaohui; Xiong, Liyan; Yu, Jia. Hierarchical Self-Attention Hybrid Sparse Networks for Document Classification. MATHEMATICAL PROBLEMS IN ENGINEERING, 2021, 2021.
  • [30] Gao, Shang; Qiu, John X.; Alawad, Mohammed; Hinkle, Jacob D.; Schaefferkoetter, Noah; Yoon, Hong-Jun; Christian, Blair; Fearn, Paul A.; Penberthy, Lynne; Wu, Xiao-Cheng; Coyle, Linda; Tourassi, Georgia; Ramanathan, Arvind. Classifying cancer pathology reports with hierarchical self-attention networks. ARTIFICIAL INTELLIGENCE IN MEDICINE, 2019, 101.