Independency-enhancing adversarial active learning

Cited by: 0
Authors
Guo, Jifeng [1 ]
Pang, Zhiqi [2 ]
Bai, Miaoyuan [2 ]
Xiao, Yanbang [2 ]
Zhang, Jian [3 ,4 ]
Affiliations
[1] Guilin Univ Aerosp Technol, Guilin, Peoples R China
[2] Northeast Forestry Univ, Coll Informat & Comp Engn, Harbin, Peoples R China
[3] Wuxi Vocat Coll Sci & Technol, Sch Artificial Intelligence, Wuxi 214000, Peoples R China
Keywords
computer vision; image classification; image segmentation; classification
DOI
10.1049/ipr2.12724
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification
081104; 0812; 0835; 1405
Abstract
The core idea of active learning is to obtain higher model performance at a lower annotation cost. This paper proposes an independency-enhancing adversarial active learning method. Unlike previous methods, independency-enhancing adversarial active learning pays particular attention to sample independence: the informativeness of a group of samples is taken to depend on the independence of the samples rather than on the simple sum of the informativeness of each sample in the group. An independent sample selection module based on hierarchical clustering is therefore designed to ensure sample independence. An adversarial approach is used to learn the feature representation of each sample, and the predicted loss value is used to label the sample's state. Finally, samples are selected according to their uncertainty, diversity and independence. Experimental results on four datasets (CIFAR-100, Caltech-101, Cityscapes and BDD100K) demonstrate the effectiveness and superiority of independency-enhancing adversarial active learning.
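The selection idea sketched in the abstract, clustering the unlabelled pool hierarchically and drawing one informative sample per cluster so that selected samples are mutually independent, can be illustrated with a minimal sketch. This is not the authors' implementation: the function name `select_independent_samples`, the single-linkage agglomerative clustering, and the use of a scalar per-sample uncertainty score are all assumptions made here for illustration only.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_independent_samples(features, uncertainty, budget):
    """Select `budget` sample indices: agglomeratively merge the pool
    into `budget` clusters (independence), then take the most uncertain
    sample within each cluster (informativeness)."""
    # start with each sample as its own cluster
    clusters = [[i] for i in range(len(features))]
    # single-linkage merging until only `budget` clusters remain
    while len(clusters) > budget:
        best = None  # (distance, i, j) of the closest cluster pair
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(euclidean(features[a], features[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters[j])
        del clusters[j]
    # one representative per cluster: the sample with the highest uncertainty
    return sorted(max(c, key=lambda idx: uncertainty[idx]) for c in clusters)

# Toy pool: three well-separated groups of two nearby points each.
feats = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0), (10.0, 0.0), (10.0, 0.2)]
unc = [0.9, 0.2, 0.1, 0.8, 0.5, 0.6]
print(select_independent_samples(feats, unc, 3))  # one index per group: [0, 3, 5]
```

With a budget of 3, the sketch returns one sample per natural group rather than the three globally most uncertain samples (indices 0, 3 and 5 here), which is the independence property the abstract describes; an uncertainty-only ranking would instead pick the two near-duplicates 0 and 3 plus 5.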
Pages: 1427-1437 (11 pages)
Related papers
50 records in total
  • [21] Study of Adversarial Machine Learning for Enhancing Human Action Classification
    Edwards, DeMarcus
    Rawat, Danda B.
    Sadler, Brian M.
    2022 IEEE 23RD INTERNATIONAL CONFERENCE ON INFORMATION REUSE AND INTEGRATION FOR DATA SCIENCE (IRI 2022), 2022, : 75 - 80
  • [22] Enhancing Medication Event Classification with Syntax Parsing and Adversarial Learning
    Szanto, Zsolt
    Banati, Balazs
    Zombori, Tamas
    ARTIFICIAL INTELLIGENCE APPLICATIONS AND INNOVATIONS, AIAI 2023, PT I, 2023, 675 : 114 - 124
  • [23] Enhancing the Security of Deep Learning Steganography via Adversarial Examples
    Shang, Yueyun
    Jiang, Shunzhi
    Ye, Dengpan
    Huang, Jiaqing
    MATHEMATICS, 2020, 8 (09)
  • [24] Enhancing active learning in the student laboratory
    Modell, HI
    Michael, JA
    Adamson, T
    Horwitz, B
    ADVANCES IN PHYSIOLOGY EDUCATION, 2004, 28 (03) : 107 - 111
  • [25] Technological Devices for Enhancing Active Learning
    Juanes, Juan A.
    Ruisoto, Pablo
    SIXTH INTERNATIONAL CONFERENCE ON TECHNOLOGICAL ECOSYSTEMS FOR ENHANCING MULTICULTURALITY (TEEM'18), 2018, : 392 - 396
  • [26] Avoiding Lingering in Learning Active Recognition by Adversarial Disturbance
    Fan, Lei
    Wu, Ying
    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 4601 - 4610
  • [27] SEAL: Semisupervised Adversarial Active Learning on Attributed Graphs
    Li, Yayong
    Yin, Jie
    Chen, Ling
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (07) : 3136 - 3147
  • [28] Structural Semantic Adversarial Active Learning for Image Captioning
    Zhang, Beichen
    Li, Liang
    Su, Li
    Wang, Shuhui
    Deng, Jincan
    Zha, Zheng-Jun
    Huang, Qingming
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 1112 - 1121
  • [29] Task-Aware Variational Adversarial Active Learning
    Kim, Kwanyoung
    Park, Dongwon
    Kim, Kwang In
    Chun, Se Young
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 8162 - 8171
  • [30] Domain Adversarial Active Learning for Domain Generalization Classification
    Chen, Jianting
    Ding, Ling
    Yang, Yunxiao
    Di, Zaiyuan
    Xiang, Yang
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2025, 37 (01) : 226 - 238