Learning Individual Styles of Conversational Gesture
Cited by: 182
Authors:
Ginosar, Shiry [1]
Bar, Amir [2]
Kohavi, Gefen [1]
Chan, Caroline [3]
Owens, Andrew [1]
Malik, Jitendra [1]
Affiliations:
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
[2] Zebra Med Vis, Kibutz Shfayim, Israel
[3] MIT, Cambridge, MA 02139 USA
Source:
2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019)
2019
Keywords:
SPEECH;
DOI:
10.1109/CVPR.2019.00361
CLC Number:
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes:
081104; 0812; 0835; 1405
Abstract:
Human speech is often accompanied by hand and arm gestures. We present a method for cross-modal translation from "in-the-wild" monologue speech of a single speaker to their conversational gesture motion. We train on unlabeled videos for which we only have noisy pseudo ground truth from an automatic pose detection system. Our proposed model significantly outperforms baseline methods in a quantitative comparison. To support research toward obtaining a computational understanding of the relationship between gesture and speech, we release a large video dataset of person-specific gestures.
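As an illustration of the training setup the abstract describes (regressing gesture motion from speech, supervised only by noisy pseudo ground truth poses from an automatic pose detector), here is a minimal PyTorch sketch. The layer sizes, audio feature dimension, keypoint count, and plain L1 objective are assumptions for illustration, not the authors' actual architecture or loss.

```python
# Hypothetical sketch: audio-feature sequence -> 2D pose-keypoint sequence,
# trained against noisy pseudo ground truth from a pose detector.
# All shapes and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

class AudioToPose(nn.Module):
    """Maps a sequence of audio features to a sequence of 2D pose keypoints."""
    def __init__(self, audio_dim=128, hidden_dim=256, num_keypoints=49):
        super().__init__()
        # Temporal 1D convolutions over the audio feature sequence (assumed encoder).
        self.encoder = nn.Sequential(
            nn.Conv1d(audio_dim, hidden_dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden_dim, hidden_dim, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Predict (x, y) for every keypoint at each time step.
        self.decoder = nn.Conv1d(hidden_dim, num_keypoints * 2, kernel_size=1)

    def forward(self, audio_feats):  # audio_feats: (batch, audio_dim, time)
        h = self.encoder(audio_feats)
        poses = self.decoder(h)  # (batch, num_keypoints*2, time)
        return poses.view(poses.size(0), -1, 2, poses.size(-1))

# One training step against pseudo ground truth poses (placeholder tensors).
model = AudioToPose()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

audio_feats = torch.randn(8, 128, 64)        # stand-in batch of audio features
pseudo_gt_poses = torch.randn(8, 49, 2, 64)  # stand-in detector keypoints (noisy labels)

pred = model(audio_feats)
loss = nn.functional.l1_loss(pred, pseudo_gt_poses)  # regression to the noisy labels
optimizer.zero_grad()
loss.backward()
optimizer.step()
```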
Pages: 3492 - 3501
Number of Pages: 10