Counterfactual Zero-Shot and Open-Set Visual Recognition

Cited by: 150
Authors
Yue, Zhongqi [1 ,3 ]
Wang, Tan [1 ]
Sun, Qianru [2 ]
Hua, Xian-Sheng [3 ]
Zhang, Hanwang [1 ]
Affiliations
[1] Nanyang Technol Univ, Singapore, Singapore
[2] Singapore Management Univ, Singapore, Singapore
[3] Alibaba Grp, DAMO Acad, Hangzhou, China
Source
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021 | 2021
DOI
10.1109/CVPR46437.2021.01515
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
We present a novel counterfactual framework for both Zero-Shot Learning (ZSL) and Open-Set Recognition (OSR), whose common challenge is generalizing to unseen classes while training only on seen classes. Our idea stems from the observation that the generated samples for unseen classes often fall outside the true distribution, which causes a severe recognition-rate imbalance between the seen classes (high) and unseen classes (low). We show that the key reason is that the generation is not Counterfactual Faithful, and we therefore propose a faithful one, whose generation answers the sample-specific counterfactual question: what would the sample look like if we set its class attribute to a certain class while keeping its sample attribute unchanged? Thanks to this faithfulness, we can apply the Consistency Rule to perform unseen/seen binary classification by asking: would its counterfactual still look like itself? If "yes", the sample is from that class, and "no" otherwise. Through extensive experiments on ZSL and OSR, we demonstrate that our framework effectively mitigates the seen/unseen imbalance and hence significantly improves overall performance. Note that this framework is orthogonal to existing methods, so it can serve as a new baseline for evaluating how well ZSL/OSR models generalize.
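The Consistency Rule described in the abstract can be illustrated with a toy sketch. Everything below is hypothetical and not the paper's actual model: the "generator" is a simple additive decoder over a made-up class-attribute matrix `W_class`, the sample attribute `z` is a random vector, and the threshold `tau` is arbitrary. The point is only the logic: regenerate the sample with each seen-class attribute while holding its sample attribute fixed, and call it "seen" when some counterfactual still looks like the original.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all names and dimensions are illustrative, not the paper's model):
# a sample is produced from a class attribute plus a sample attribute.
D, C = 8, 3                          # feature dimension, number of seen classes
W_class = rng.normal(size=(C, D))    # stand-in "decoder" for class attributes


def generate(class_id, z_sample):
    """Counterfactual generation: set the class attribute, keep z_sample."""
    return W_class[class_id] + z_sample


def consistency_score(x, z_sample, seen_classes):
    """Distance from x to its nearest seen-class counterfactual."""
    return min(np.linalg.norm(x - generate(c, z_sample)) for c in seen_classes)


# A seen-class sample is reproduced exactly by its own counterfactual,
# so its score is 0; a sample drawn off the seen-class manifold is not.
z = rng.normal(scale=0.1, size=D)
x_seen = generate(1, z)
x_unseen = rng.normal(size=D) + z

tau = 1.0  # arbitrary threshold for this toy example
is_seen = consistency_score(x_seen, z, range(C)) <= tau      # True: score is 0
is_unseen = consistency_score(x_unseen, z, range(C)) > tau   # very likely True here
```

In the toy, the seen sample's score is exactly zero because one counterfactual reconstructs it perfectly; the unseen sample's score stays large because no seen-class attribute can explain it, which is the "would its counterfactual still look like itself?" test in miniature.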
Pages: 15399-15409 (11 pages)