Multi-Task Consistency for Active Learning

Cited by: 0
Authors
Hekimoglu, Aral [1 ]
Friedrich, Philipp [2 ]
Zimmer, Walter [1 ]
Schmidt, Michael [2 ]
Marcos-Ramiro, Alvaro [2 ]
Knoll, Alois [1 ]
Affiliations
[1] Techn Univ Munich, Munich, Germany
[2] BMW Grp, Munich, Germany
Keywords
DOI
10.1109/ICCVW60793.2023.00366
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning-based solutions for vision tasks require a large amount of labeled training data to ensure their performance and reliability. In single-task vision-based settings, inconsistency-based active learning has proven effective in selecting informative samples for annotation. However, there is a lack of research exploiting the inconsistency between multiple tasks in multi-task networks. To address this gap, we propose a novel multi-task active learning strategy for two coupled vision tasks: object detection and semantic segmentation. Our approach leverages the inconsistency between them to identify informative samples across both tasks. We propose three constraints that specify how the tasks are coupled and introduce a method for determining the pixels belonging to the object detected by a bounding box, to later quantify the constraints as inconsistency scores. To evaluate the effectiveness of our approach, we establish multiple baselines for multi-task active learning and introduce a new metric, mean Detection Segmentation Quality (mDSQ), tailored to multi-task active learning comparison, which addresses the performance of both tasks. We conduct extensive experiments on the nuImages and A9 datasets, demonstrating that our approach outperforms existing state-of-the-art methods by up to 3.4% mDSQ on nuImages. Our approach achieves 95% of the fully-trained performance using only 67% of the available data, corresponding to 20% fewer labels compared to random selection and 5% fewer labels compared to the state-of-the-art selection strategy. The code is available at https://github.com/aralhekimoglu/BoxMask.
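The abstract describes quantifying detection-segmentation inconsistency by determining the pixels covered by a detected box and comparing them against the segmentation prediction. A minimal sketch of one such score, assuming a simple IoU-based disagreement between a box and the segmented pixels of the same class (the helper names and this exact formulation are illustrative, not the paper's actual constraints), could look like:

```python
import numpy as np

def box_to_mask(box, shape):
    """Rasterize a bounding box (x1, y1, x2, y2) into a binary pixel mask."""
    mask = np.zeros(shape, dtype=bool)
    x1, y1, x2, y2 = box
    mask[y1:y2, x1:x2] = True
    return mask

def inconsistency_score(box, seg_map, class_id):
    """Illustrative inconsistency score: 1 - IoU between the box region
    and the pixels the segmentation head assigns to the same class.
    A high score flags a sample where the two task heads disagree."""
    box_mask = box_to_mask(box, seg_map.shape)
    cls_mask = seg_map == class_id
    inter = np.logical_and(box_mask, cls_mask).sum()
    union = np.logical_or(box_mask, cls_mask).sum()
    iou = inter / union if union > 0 else 0.0
    return 1.0 - iou
```

In an active learning loop, images would be ranked by such a score and the most inconsistent ones sent for annotation; a perfectly agreeing box and segment yield a score of 0.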
Pages: 3407-3416
Page count: 10