A Quantum Annealing Instance Selection Approach for Efficient and Effective Transformer Fine-Tuning

Cited by: 0
Authors
Pasin, Andrea [1 ]
Cunha, Washington [2 ]
Goncalves, Marcos Andre [2 ]
Ferro, Nicola [1 ]
Affiliations
[1] Univ Padua, Padua, Italy
[2] Univ Fed Minas Gerais, Belo Horizonte, MG, Brazil
Source
PROCEEDINGS OF THE 2024 ACM SIGIR INTERNATIONAL CONFERENCE ON THE THEORY OF INFORMATION RETRIEVAL, ICTIR 2024 | 2024
Funding
São Paulo Research Foundation (FAPESP), Brazil;
Keywords
Instance Selection; Quantum Computing; Text Classification;
DOI
10.1145/3664190.3672515
Chinese Library Classification
TP301 [Theory, Methods];
Discipline Code
081202 ;
Abstract
Deep Learning approaches have become pervasive in recent years due to their ability to solve complex tasks. However, these models need huge datasets for proper training and good generalization, which translates into long training and fine-tuning times, up to several days for the most complex models and largest datasets. In this work, we present a novel quantum Instance Selection (IS) approach that significantly reduces the size of the training datasets (by up to 28%) while maintaining the model's effectiveness, thus promoting (training) speedups and scalability. Our solution is innovative in that it exploits a different computing paradigm, Quantum Annealing (QA), a Quantum Computing approach for tackling optimization problems. To the best of our knowledge, there have been no prior attempts to tackle the IS problem using QA. Furthermore, we propose a new Quadratic Unconstrained Binary Optimization (QUBO) formulation tailored to the IS problem, which is a contribution in itself. Through an extensive set of experiments on several Text Classification benchmarks, we empirically demonstrate our quantum solution's feasibility and its competitiveness with current state-of-the-art IS solutions.
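The abstract does not reproduce the paper's QUBO formulation, so the sketch below is an editor's illustration of the general pattern rather than the authors' method: instance selection is encoded as a binary vector x, where x_i = 1 keeps training instance i, and a QUBO matrix Q combines an assumed pairwise redundancy penalty, a linear reward for keeping instances, and a soft constraint on the kept-subset size. The tiny problem is minimized by exhaustive enumeration in place of a quantum annealer, and all term choices and coefficients (lam, mu, keep_fraction) are illustrative assumptions.

```python
import itertools

import numpy as np


def build_is_qubo(X, keep_fraction=0.72, lam=1.0, mu=2.0):
    """Illustrative instance-selection QUBO (NOT the paper's formulation).

    Binary variable x_i = 1 means "keep training instance i". Energy terms:
      lam * sum_{i<j} S_ij x_i x_j    redundancy penalty (cosine similarity)
      - sum_i x_i                     linear reward for keeping instances
      mu * (sum_i x_i - k)^2          soft constraint: keep roughly k instances
    The constant mu * k^2 is dropped since it does not change the argmin.
    """
    n = X.shape[0]
    k = keep_fraction * n
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = Xn @ Xn.T  # pairwise cosine similarities between instance embeddings
    Q = np.zeros((n, n))
    for i in range(n):
        Q[i, i] = -1.0 + mu * (1.0 - 2.0 * k)  # uses x_i^2 = x_i for binary x_i
        for j in range(i + 1, n):
            Q[i, j] = lam * S[i, j] + 2.0 * mu
    return Q


def minimize_exhaustively(Q):
    """Brute-force argmin of x^T Q x over binary x; viable only for tiny n.

    A quantum annealer (or simulated annealing) would replace this step.
    """
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in itertools.product((0, 1), repeat=n):
        x = np.asarray(bits, dtype=float)
        e = x @ Q @ x
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e


rng = np.random.default_rng(0)
X = rng.normal(size=(12, 8))  # 12 toy instances with 8-dim embeddings
x, energy = minimize_exhaustively(build_is_qubo(X))
print("kept instances:", np.flatnonzero(x), "| energy:", round(energy, 3))
```

On real data the dense n x n QUBO would be handed to a QA sampler (e.g., via D-Wave's tooling) rather than enumerated, the size-penalty weight mu would need to dominate the similarity terms, and the 72% keep fraction here merely mirrors the abstract's reported reduction of up to 28%.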
Pages: 205 - 214
Number of pages: 10
Related Papers
50 in total
  • [21] Multi-phase Fine-Tuning: A New Fine-Tuning Approach for Sign Language Recognition
    Noha Sarhan
    Mikko Lauri
    Simone Frintrop
    KI - Künstliche Intelligenz, 2022, 36 : 91 - 98
  • [22] Quantum fine-tuning in stringy quintessence models
    Hertzberg, Mark R.
    Sandora, McCullen
    Trodden, Mark
    PHYSICS LETTERS B, 2019, 797
  • [23] Silicon quantum dots: fine-tuning to maturity
    Morello, Andrea
    NANOTECHNOLOGY, 2015, 26 (50)
  • [24] On the Effectiveness of Parameter-Efficient Fine-Tuning
    Fu, Zihao
    Yang, Haoran
    So, Anthony Man-Cho
    Lam, Wai
    Bing, Lidong
    Collier, Nigel
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 11, 2023, : 12799 - 12807
  • [25] PockEngine: Sparse and Efficient Fine-tuning in a Pocket
    Zhu, Ligeng
    Hu, Lanxiang
    Lin, Ji
    Wang, Wei-Chen
    Chen, Wei-Ming
    Gan, Chuang
    Han, Song
    56TH IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE, MICRO 2023, 2023, : 1381 - 1394
  • [26] Efficient Fine-Tuning of BERT Models on the Edge
    Vucetic, Danilo
    Tayaranian, Mohammadreza
    Ziaeefard, Maryam
    Clark, James J.
    Meyer, Brett H.
    Gross, Warren J.
    2022 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS 22), 2022, : 1838 - 1842
  • [27] An Efficient Approach for Instance Selection
    Carbonera, Joel Luis
    BIG DATA ANALYTICS AND KNOWLEDGE DISCOVERY, DAWAK 2017, 2017, 10440 : 228 - 243
  • [28] AN ATTENTION-BASED BACKEND ALLOWING EFFICIENT FINE-TUNING OF TRANSFORMER MODELS FOR SPEAKER VERIFICATION
    Peng, Junyi
    Plchot, Oldrich
    Stafylakis, Themos
    Mosner, Ladislav
    Burget, Lukas
    Cernocky, Jan
    2022 IEEE SPOKEN LANGUAGE TECHNOLOGY WORKSHOP, SLT, 2022, : 555 - 562
  • [29] Faster Convergence for Transformer Fine-tuning with Line Search Methods
    Kenneweg, Philip
    Galli, Leonardo
    Kenneweg, Tristan
    Hammer, Barbara
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [30] FINE-TUNING APPROACH TO NIR FACE RECOGNITION
    Kim, Jeyeon
    Jo, Hoon
    Ra, Moonsoo
    Kim, Whoi-Yul
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 2337 - 2341