Semantic2Graph: graph-based multi-modal feature fusion for action segmentation in videos

Cited by: 6
Authors
Zhang, Junbin [1 ]
Tsai, Pei-Hsuan [2 ]
Tsai, Meng-Hsun [1 ,3 ]
Affiliations
[1] Natl Cheng Kung Univ, Dept Comp Sci & Informat Engn, Tainan 701, Taiwan
[2] Natl Cheng Kung Univ, Inst Mfg Informat & Syst, Tainan 701, Taiwan
[3] Natl Yang Ming Chiao Tung Univ, Dept Comp Sci, Hsinchu 300, Taiwan
Keywords
Video action segmentation; Graph neural networks; Computer vision; Semantic features; Multi-modal fusion; Convolutional network; Localization; Attention
DOI
10.1007/s10489-023-05259-z
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Video action segmentation has been widely applied in many fields. Most previous studies employ video-based vision models for this purpose, but these often rely on large receptive fields, LSTMs, or Transformers to capture long-term dependencies within videos, leading to significant computational resource requirements. Graph-based models have been proposed to address this challenge, but earlier graph-based models are less accurate. This study therefore introduces a graph-structured approach named Semantic2Graph to model long-term dependencies in videos, reducing computational cost while improving accuracy. We construct a frame-level graph structure of the video. Temporal edges model the temporal relations and action order within the video. In addition, we design positive and negative semantic edges, with corresponding edge weights, to capture both long-term and short-term semantic relationships among video actions. Node attributes comprise a rich set of multi-modal features extracted from video content, graph structure, and label text, covering visual, structural, and semantic cues. To fuse this multi-modal information effectively, we employ a graph neural network (GNN) model that combines the multi-modal features for node action-label classification. Experimental results demonstrate that Semantic2Graph outperforms state-of-the-art methods, particularly on the GTEA and 50Salads benchmark datasets. Multiple ablation experiments further validate the effectiveness of the semantic features in enhancing model performance. Notably, the semantic edges allow Semantic2Graph to capture long-term dependencies at low cost, addressing the computational resource constraints of video-based vision models.
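The abstract describes the approach only at a high level, so the following is a minimal, illustrative sketch in PyTorch of the general recipe it names: frame nodes carrying multi-modal features, temporal edges between consecutive frames, weighted positive and negative semantic edges, and a GNN classifying each node's action label. All tensor sizes, edge placements, edge weights, and the two-layer GCN architecture below are assumptions made for illustration; the paper's actual feature extractors, edge-construction rules, and network are not given in this record.

```python
import torch
import torch.nn as nn

# Hypothetical sizes: T frames, D-dim fused multi-modal node features, C action classes.
T, D, C = 100, 64, 10

# Node attributes: per-frame multi-modal features (visual + structural + semantic),
# stubbed here with random values in place of real extracted features.
x = torch.randn(T, D)

# Temporal edges: connect consecutive frames to encode temporal order.
adj = torch.zeros(T, T)
idx = torch.arange(T - 1)
adj[idx, idx + 1] = 1.0
adj[idx + 1, idx] = 1.0

# Semantic edges (illustrative placements and weights): a positive weight links
# frames assumed to share an action; a negative weight links frames of different actions.
adj[0, 50] = adj[50, 0] = 0.8   # hypothetical positive semantic edge
adj[0, 90] = adj[90, 0] = -0.5  # hypothetical negative semantic edge

# Symmetric normalization A_hat = D^{-1/2} (A + I) D^{-1/2}, as in a standard GCN;
# absolute values are used so negative edge weights do not produce negative degrees.
a_hat = adj + torch.eye(T)
deg = a_hat.abs().sum(dim=1)
d_inv_sqrt = deg.pow(-0.5)
a_norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)

class FrameGCN(nn.Module):
    """Two-layer GCN that classifies each frame node into an action label."""
    def __init__(self, d_in, d_hid, n_cls):
        super().__init__()
        self.w1 = nn.Linear(d_in, d_hid)
        self.w2 = nn.Linear(d_hid, n_cls)

    def forward(self, a, h):
        h = torch.relu(a @ self.w1(h))   # propagate over temporal + semantic edges
        return a @ self.w2(h)            # per-frame action logits

model = FrameGCN(D, 128, C)
logits = model(a_norm, x)                # shape (T, C): one action prediction per frame
print(logits.shape)
```

In this sketch, a positive weight pulls the representations of frames assumed to share an action together during message passing, while a negative weight pushes frames of different actions apart. That is one plausible reading of how weighted semantic edges could supply long-range context without a large temporal receptive field.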
Pages: 2084-2099
Number of pages: 16