A Hierarchical Graph-Based Approach for Recognition and Description Generation of Bimanual Actions in Videos

Cited by: 0
Authors
Ziaeetabar, Fatemeh [1]
Tamosiunaite, Minija [2, 3]
Woergoetter, Florentin [3]
Affiliations
[1] Univ Tehran, Coll Sci, Sch Math Stat & Comp Sci, Dept Comp Sci, Tehran 1417935840, Iran
[2] Vytautas Magnus Univ, Fac Informat, LT-44248 Kaunas, Lithuania
[3] Georg August Univ Gottingen, Phys Inst Biophys 3, Bernstein Ctr Computat Neurosci, Dept Computat Neurosci, D-37073 Gottingen, Germany
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Videos; Visualization; Robot kinematics; Accuracy; Transformers; Context modeling; Attention mechanisms; Deep learning; Semantics; Feature extraction; Bimanual action recognition; manipulation actions; graph-based modeling; hierarchical attention mechanisms; video action analysis; hand-object interactions; NETWORKS;
DOI
10.1109/ACCESS.2024.3509674
CLC Classification Number
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
A nuanced understanding of (bimanual) manipulation actions in videos, and the generation of detailed descriptions of them, are important for disciplines such as robotics, human-computer interaction, and video content analysis. This study presents a novel method that integrates graph-based modeling with layered hierarchical attention mechanisms, yielding more precise and more comprehensive video descriptions. To achieve this, we first encode the spatio-temporal interdependencies between objects and actions with scene graphs, and then combine this, in a second step, with a novel three-level architecture that implements a hierarchical attention mechanism using Graph Attention Networks (GATs). The three-level GAT architecture captures local as well as global contextual elements. In this way, several descriptions of different semantic complexity can be generated in parallel for the same video clip, enhancing the discriminative accuracy of action recognition and action description. The performance of our approach is evaluated empirically on several 2D and 3D datasets. Compared with the state of the art, our method consistently achieves better accuracy, precision, and contextual relevance in both action recognition and description generation. In a large set of ablation experiments, we also assess the role of the different components of our model. With its multi-level approach, the system produces descriptions of different semantic depth, much as different people describe the same action at different levels of detail. Furthermore, the improved insight into bimanual hand-object interactions achieved by our model may advance robotics by enabling the emulation of intricate human actions with greater precision.
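The abstract describes the architecture only at a high level; its hierarchical attention builds on standard Graph Attention Network layers applied to scene-graph nodes. As background, a minimal single-head GAT layer over a scene-graph adjacency matrix might look like the following NumPy sketch (illustrative only; function and variable names are our own, not the authors' implementation):

```python
import numpy as np

def gat_layer(H, A, W, a, slope=0.2):
    """Single-head graph-attention layer in the style of standard GATs.

    H : (N, F)   node features (e.g. hand/object nodes of a scene graph)
    A : (N, N)   adjacency matrix with self-loops (1 = edge, 0 = no edge)
    W : (F, Fp)  learned linear projection
    a : (2*Fp,)  learned attention vector
    """
    Z = H @ W                                  # project node features: (N, Fp)
    n = Z.shape[0]
    # attention logits e_ij = LeakyReLU(a^T [z_i || z_j]) for every node pair
    e = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            s = a @ np.concatenate([Z[i], Z[j]])
            e[i, j] = s if s > 0 else slope * s
    e = np.where(A > 0, e, -1e9)               # mask out non-neighbours
    e -= e.max(axis=1, keepdims=True)          # numerically stable softmax
    att = np.exp(e)
    att /= att.sum(axis=1, keepdims=True)      # rows sum to 1 over neighbours
    return att @ Z                             # attention-weighted aggregation

# Toy example: 4 scene-graph nodes with 3 features each
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]])
W = rng.normal(size=(3, 2))
a = rng.normal(size=(4,))
out = gat_layer(H, A, W, a)                    # (4, 2) updated node features
```

In a hierarchical setup such as the one the abstract outlines, layers like this could be stacked at successive granularities (e.g. per-hand subgraphs, hand-object pairs, full scene), each level attending over a coarser graph built from the previous level's outputs.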
Pages: 180328-180360
Page count: 33