37 references in total
- [1] Zhai X., Kolesnikov A., Houlsby N., Beyer L., Scaling vision transformers, Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., pp. 12104-12113, (2022)
- [2] Jia M., Wu Z., Reiter A., Cardie C., Belongie S., Lim S.-N., Exploring visual engagement signals for representation learning, Proc. IEEE/CVF Int. Conf. Comput. Vis., pp. 4206-4217, (2021)
- [3] Devlin J., Chang M.-W., Lee K., Toutanova K., BERT: Pre-training of deep bidirectional transformers for language understanding, Proc. NAACL-HLT, pp. 4171-4186, (2019)
- [4] Zaken E.B., Ravfogel S., Goldberg Y., BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models, Proc. ACL, pp. 1-9, (2022)
- [5] He K., Chen X., Xie S., Li Y., Dollár P., Girshick R., Masked autoencoders are scalable vision learners, Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., pp. 16000-16009, (2022)
- [6] Hu E.J., et al., LoRA: Low-rank adaptation of large language models, Proc. Int. Conf. Learn. Representations, (2022)
- [7] Qiu X., Hao T., Shi S., Tan X., Xiong Y.-J., Chain-of-LoRA: Enhancing the instruction fine-tuning performance of low-rank adaptation on diverse instruction set, IEEE Signal Process. Lett., 31, pp. 875-879, (2024)
- [8] Qi W., Ruan Y.P., Zuo Y., Li T., Parameter-efficient tuning on layer normalization for pre-trained language models, CoRR
- [9] Houlsby N., et al., Parameter-efficient transfer learning for NLP, Proc. 36th Int. Conf. Mach. Learn., pp. 2790-2799, (2019)
- [10] Chen S., et al., AdaptFormer: Adapting vision transformers for scalable visual recognition, Proc. Adv. Neural Inf. Process. Syst., pp. 16664-16678, (2022)