QUB-PHEO: A Visual-Based Dyadic Multi-View Dataset for Intention Inference in Collaborative Assembly

Cited by: 0
Authors
Adebayo, Samuel [1 ,2 ]
Mcloone, Sean [1 ,2 ]
Dessing, Joost C. [1 ,3 ]
Affiliations
[1] Queens Univ Belfast, Ctr Intelligent Autonomous Mfg Syst, Belfast BT7 1NN, Northern Ireland
[2] Queens Univ Belfast, Sch Elect Elect Engn & Comp Sci, Belfast BT7 1NN, Northern Ireland
[3] Queens Univ Belfast, Sch Psychol, Belfast BT7 1NN, Northern Ireland
Source
IEEE ACCESS | 2024 / Vol. 12
Keywords
Human-robot interaction; Robots; Assembly; Visualization; Complexity theory; Encoding; Collaboration; Annotations; Service robots; Inference algorithms; Computer vision; dyadic interaction; multi-cue dataset; multi-view dataset; task-oriented interaction; DATABASE; GAZE
DOI
10.1109/ACCESS.2024.3485162
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
QUB-PHEO introduces a visual-based, dyadic dataset with the potential to advance human-robot interaction (HRI) research in assembly operations and intention inference. The dataset captures rich multimodal interactions between two participants, one acting as a 'robot surrogate,' across a variety of assembly tasks that are further broken down into 36 distinct subtasks. With rich visual annotations (facial landmarks, gaze, hand movements, object localization, and more) for 70 participants, QUB-PHEO is offered in two versions: full video data for 50 participants and visual cues only for all 70. Designed to improve machine learning models for HRI, QUB-PHEO enables deeper analysis of subtle interaction cues and intentions, promising contributions to the field.
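As an illustration only, the following minimal Python sketch shows how per-frame visual-cue annotations of the kind described in the abstract (gaze, facial landmarks, hand movements, object localization) might be iterated. The directory layout, file format, field names, and identifiers here are assumptions for illustration, not the dataset's published format.

import json
from pathlib import Path

def load_cues(root: str, participant: str, subtask: str):
    """Yield per-frame cue records from one hypothetical JSON-lines
    annotation file (layout and keys are assumed, not documented)."""
    path = Path(root) / participant / f"{subtask}.jsonl"  # assumed layout
    with path.open() as f:
        for line in f:
            frame = json.loads(line)
            yield {
                "frame_id": frame.get("frame_id"),
                "gaze": frame.get("gaze"),            # e.g. a 2-D gaze vector
                "landmarks": frame.get("landmarks"),  # facial landmarks
                "hands": frame.get("hands"),          # hand keypoints
                "objects": frame.get("objects"),      # object bounding boxes
            }

if __name__ == "__main__":
    # Hypothetical usage: participant and subtask IDs are placeholders.
    for cue in load_cues("QUB-PHEO/annotations", "P01", "subtask_01"):
        print(cue["frame_id"], cue["gaze"])
        break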
Pages: 157050-157066
Page count: 17