Learning Individual Styles of Conversational Gesture

Cited by: 213
Authors
Ginosar, Shiry [1 ]
Bar, Amir [2 ]
Kohavi, Gefen [1 ]
Chan, Caroline [3 ]
Owens, Andrew [1 ]
Malik, Jitendra [1 ]
Affiliations
[1] University of California, Berkeley, Berkeley, CA 94720, USA
[2] Zebra Medical Vision, Kibbutz Shefayim, Israel
[3] MIT, Cambridge, MA 02139, USA
Source
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019) | 2019
Keywords
Speech
DOI
10.1109/CVPR.2019.00361
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Human speech is often accompanied by hand and arm gestures. We present a method for cross-modal translation from "in-the-wild" monologue speech of a single speaker to their conversational gesture motion. We train on unlabeled videos for which we only have noisy pseudo ground truth from an automatic pose detection system. Our proposed model significantly outperforms baseline methods in a quantitative comparison. To support research toward obtaining a computational understanding of the relationship between gesture and speech, we release a large video dataset of person-specific gestures.
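As a rough illustration of the speech-to-gesture setup described in the abstract, the sketch below regresses a sequence of 2D keypoints from log-mel audio features with a small temporal convolutional network, trained with an L1 loss against noisy pseudo ground truth poses such as those produced by an off-the-shelf pose detector. This is a minimal sketch under assumed shapes and hyperparameters, not the authors' architecture; the feature dimensions, keypoint count, network size, and loss weighting are illustrative assumptions.

# Minimal speech-to-gesture sketch (assumed shapes and hyperparameters).
import torch
import torch.nn as nn

N_MELS = 64        # assumed number of log-mel bands per audio frame
N_KEYPOINTS = 49   # assumed number of 2D keypoints per pose
T = 64             # assumed number of time steps per training clip

class SpeechToGesture(nn.Module):
    """Temporal conv net: audio features -> 2D keypoint sequence."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_MELS, 256, kernel_size=5, padding=2),
            nn.BatchNorm1d(256),
            nn.ReLU(),
            nn.Conv1d(256, 256, kernel_size=5, padding=2),
            nn.BatchNorm1d(256),
            nn.ReLU(),
            nn.Conv1d(256, N_KEYPOINTS * 2, kernel_size=5, padding=2),
        )

    def forward(self, audio):           # audio: (B, N_MELS, T)
        out = self.net(audio)           # (B, K*2, T)
        return out.permute(0, 2, 1)     # (B, T, K*2) keypoint coordinates

model = SpeechToGesture()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
l1 = nn.L1Loss()  # regression against noisy pseudo ground truth poses

# One illustrative training step on random stand-in data.
audio = torch.randn(8, N_MELS, T)                # stand-in log-mel features
pseudo_gt = torch.randn(8, T, N_KEYPOINTS * 2)   # stand-in pose-detector output
pred = model(audio)
loss = l1(pred, pseudo_gt)
optimizer.zero_grad()
loss.backward()
optimizer.step()

In the setting the abstract describes, the pseudo ground truth would come from running an automatic pose detection system over unlabeled video, so the regression target is noisy by construction.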
Pages: 3492-3501
Number of pages: 10