Automatic Deceit Detection Through Multimodal Analysis of High-Stake Court-Trials

Cited by: 2
Authors
Bicer, Berat [1 ]
Dibeklioglu, Hamdi [1 ]
Affiliations
[1] Bilkent Univ, Dept Comp Engn, TR-06800 Ankara, Turkiye
Keywords
Feature extraction; Psychology; Visualization; Task analysis; Transformers; Computational modeling; Transfer learning; Affective computing; automatic deceit detection; behavioral analysis; deep learning; multimodal data analysis; RECOGNITION; ATTENTION; CLASSIFICATION; INDICATORS; DECEPTION;
DOI
10.1109/TAFFC.2023.3322331
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
In this article, we propose the use of convolutional self-attention for attention-based representation learning, and we replace traditional vectorization methods with a transformer as the backbone of our speech model for transfer learning within our automatic deceit detection framework. The design performs multimodal data analysis and applies fusion to merge the visual, vocal, and speech (textual) channels, reporting deceit predictions. Our experimental results show that the proposed architecture improves the state of the art on the popular Real-Life Trial (RLT) dataset in terms of correct classification rate. To further assess the generalizability of our design, we experiment on the low-stakes Box of Lies (BoL) dataset, where we also achieve state-of-the-art performance, and we provide cross-corpus comparisons. Following our analysis, we report that (1) convolutional self-attention learns meaningful representations while performing joint attention computation for deception, (2) apparent deceptive intent is a continuous function of time, and subjects can display varying levels of apparent deceptive intent throughout recordings, and (3) in support of criminal psychology findings, studying abnormal behavior out of context can be an unreliable way to predict deceptive intent.
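To illustrate the core idea named in the abstract, the sketch below shows one common way to realize convolutional self-attention: the query, key, and value projections are computed by same-padded 1-D convolutions over the time axis instead of per-frame linear maps, so each attention score reflects a local temporal window of behavior rather than a single frame. This is a minimal NumPy sketch under that assumption, not the authors' actual implementation; all names (`conv_proj`, kernel size, feature dimensions) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_proj(x, w):
    """Same-padded 1-D convolution along time.
    x: (T, D) feature sequence, w: (K, D, D) kernel -> (T, D)."""
    K = w.shape[0]
    pad = K // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))  # zero-pad the time axis only
    T, D = x.shape
    out = np.zeros((T, D))
    for t in range(T):
        for k in range(K):
            out[t] += xp[t + k] @ w[k]    # mix a window of K frames
    return out

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def conv_self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention whose Q/K/V come from
    1-D convolutions, so scores compare local windows, not frames."""
    q = conv_proj(x, wq)
    k = conv_proj(x, wk)
    v = conv_proj(x, wv)
    attn = softmax(q @ k.T / np.sqrt(x.shape[1]))  # (T, T) weights
    return attn @ v                                # (T, D) output

# Toy input: 50 frames of 16-dim per-frame features (e.g. one modality).
T, D, K = 50, 16, 3
x = rng.normal(size=(T, D))
wq, wk, wv = (rng.normal(size=(K, D, D)) * 0.1 for _ in range(3))
y = conv_self_attention(x, wq, wk, wv)
print(y.shape)  # (50, 16)
```

In a multimodal pipeline like the one described, a block of this kind could be applied per channel before fusion; the kernel size controls how much temporal context each attention query summarizes.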
Pages: 342-356 (15 pages)