Zero-shot Personality Perception From Facial Images

Cited by: 0
Authors
Gan, Peter Zhuowei [1 ]
Sowmya, Arcot [1 ]
Mohammadi, Gelareh [1 ]
Affiliations
[1] Univ New South Wales, Sydney, NSW, Australia
Source
AI 2022: ADVANCES IN ARTIFICIAL INTELLIGENCE | 2022 / Vol. 13728
Keywords
Personality; Personality perception; Personality computing; Data-driven approach; Computational modeling; Transfer learning; Big 5; Impressions; Model
DOI
10.1007/978-3-031-22695-3_4
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Personality perception is an important process that affects our behaviours towards others, with applications across many domains. Automatic personality perception (APP) tools can help create more natural human-machine interactions and support the analysis of human-human interactions. However, collecting personality assessments is a costly and tedious task. This paper presents a new method for zero-shot personality perception from facial images. Harnessing the latent psychometric layer of CLIP (Contrastive Language-Image Pre-training), the proposed PsyCLIP is the first zero-shot personality perception model to achieve results competitive with state-of-the-art supervised models. With PsyCLIP, we establish the existence of latent psychometric information in CLIP and demonstrate its use in the domain of personality computing. For evaluation, we compiled a new personality dataset consisting of 41,800 facial images of various individuals labelled with their corresponding perceived Myers-Briggs Type Indicator (MBTI) types. PsyCLIP achieved statistically significant results (p<0.01) in predicting all four Myers-Briggs dimensions without requiring any training dataset.
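The paper's exact prompts and pipeline are not reproduced in this record. As a hedged illustration only, the zero-shot mechanism the abstract describes amounts to comparing a CLIP image embedding against text-prompt embeddings for the two poles of an MBTI dimension and picking the closer one by cosine similarity. The sketch below uses random vectors as stand-ins for real CLIP encoder outputs, and the prompt wordings are hypothetical, not taken from PsyCLIP:

```python
import numpy as np

def zero_shot_classify(image_emb, prompt_embs):
    """Pick the text prompt whose embedding is most cosine-similar
    to the image embedding (the standard CLIP zero-shot recipe)."""
    # L2-normalise so the dot product equals cosine similarity
    img = image_emb / np.linalg.norm(image_emb)
    txt = prompt_embs / np.linalg.norm(prompt_embs, axis=1, keepdims=True)
    sims = txt @ img                      # one similarity score per prompt
    return int(np.argmax(sims)), sims

# Hypothetical prompts for one MBTI dimension (Extraversion vs Introversion)
prompts = ["a photo of an extraverted person",
           "a photo of an introverted person"]

# Stand-in embeddings; in practice these would come from CLIP's
# image and text encoders (e.g. 512-dimensional for ViT-B/32)
rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)
prompt_embs = rng.normal(size=(2, 512))

label, sims = zero_shot_classify(image_emb, prompt_embs)
print(prompts[label])
```

Repeating this comparison for each of the four MBTI dimensions yields a full type prediction with no training data, which is the sense in which the method is zero-shot.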
Pages: 43-56
Page count: 14