Zero-shot Personality Perception From Facial Images

Cited by: 0
Authors
Gan, Peter Zhuowei [1 ]
Sowmya, Arcot [1 ]
Mohammadi, Gelareh [1 ]
Affiliations
[1] Univ New South Wales, Sydney, NSW, Australia
Source
AI 2022: ADVANCES IN ARTIFICIAL INTELLIGENCE | 2022, Vol. 13728
Keywords
Personality; Personality perception; Personality computing; Data-driven approach; Computational modeling; Transfer learning; BIG 5; IMPRESSIONS; MODEL
DOI
10.1007/978-3-031-22695-3_4
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Personality perception is an important process that affects our behaviour towards others, with applications across many domains. Automatic personality perception (APP) tools can help create more natural interactions between humans and machines and deepen our understanding of human-human interactions. However, collecting personality assessments is a costly and tedious task. This paper presents a new method for zero-shot personality perception from facial images. Harnessing the latent psychometric layer of CLIP (Contrastive Language-Image Pre-training), the proposed PsyCLIP is the first zero-shot personality perception model to achieve competitive results compared with state-of-the-art supervised models. With PsyCLIP, we establish the existence of latent psychometric information in CLIP and demonstrate its use in the domain of personality computing. For evaluation, we compiled a new personality dataset consisting of 41,800 facial images of various individuals labelled with their corresponding perceived Myers-Briggs Type Indicator (MBTI) types. PsyCLIP achieved statistically significant results (p < 0.01) in predicting all four Myers-Briggs dimensions without requiring any training dataset.
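The zero-shot mechanism the abstract describes can be sketched as follows: the embedding of a facial image is compared against text-prompt embeddings for the two poles of each MBTI dimension, and a softmax over the similarities yields a perceived-trait probability. This is a minimal illustrative sketch only; PsyCLIP uses CLIP's actual image and text encoders for the embeddings, whereas the encoders below are deterministic random stand-ins, and the prompt wordings are hypothetical, not taken from the paper.

```python
import math
import random

# The two poles of each MBTI dimension, phrased as text prompts.
# These wordings are illustrative assumptions, not the paper's prompts.
MBTI_DIMENSIONS = {
    "E/I": ("a photo of an extraverted person", "a photo of an introverted person"),
    "S/N": ("a photo of a sensing person", "a photo of an intuitive person"),
    "T/F": ("a photo of a thinking person", "a photo of a feeling person"),
    "J/P": ("a photo of a judging person", "a photo of a perceiving person"),
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def zero_shot_mbti(image_emb, text_encoder, temperature=100.0):
    """Probability assigned to the first pole of each MBTI dimension."""
    scores = {}
    for dim, (pole_a, pole_b) in MBTI_DIMENSIONS.items():
        sims = [cosine(image_emb, text_encoder(pole_a)),
                cosine(image_emb, text_encoder(pole_b))]
        m = max(sims)  # numerically stabilised two-way softmax
        exps = [math.exp(temperature * (s - m)) for s in sims]
        scores[dim] = exps[0] / sum(exps)
    return scores

# Stand-in encoder: a deterministic pseudo-random 512-d embedding per
# input string, replacing CLIP's image/text encoders for this sketch.
def _fake_embedding(seed):
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(512)]

def fake_text_encoder(prompt):
    return _fake_embedding(prompt)

image_emb = _fake_embedding("some face image")
scores = zero_shot_mbti(image_emb, fake_text_encoder)
```

Because no image-text pairs are ever labelled with personality, the model requires no training data for this task: the text prompts alone define the classes, which is what makes the approach zero-shot.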
Pages: 43-56
Page count: 14