50 entries in total
- [31] Patch is enough: naturalistic adversarial patch against vision-language pre-training models. Visual Intelligence, 2(1).
- [32] Omniview-Tuning: Boosting Viewpoint Invariance of Vision-Language Pre-training Models. Computer Vision – ECCV 2024, Pt XXVI, 2025, 15084: 309-327.
- [33] Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone. Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022.
- [34] Vision-Language Pre-Training: Basics, Recent Advances, and Future Trends. Foundations and Trends in Computer Graphics and Vision, 2022, 14(3-4): 163-352.
- [35] Position-guided Text Prompt for Vision-Language Pre-training. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023: 23242-23251.
- [36] Kaleido-BERT: Vision-Language Pre-training on Fashion Domain. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021), 2021: 12642-12652.
- [38] ViLTA: Enhancing Vision-Language Pre-training through Textual Augmentation. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 2023: 3135-3146.
- [39] Subsampling of Frequent Words in Text for Pre-training a Vision-Language Model. Proceedings of the 1st Workshop on Large Generative Models Meet Multimodal Applications (LGM3A 2023), 2023: 61-67.
- [40] Fine-Grained Semantically Aligned Vision-Language Pre-Training. Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022.