PAXION: Patching Action Knowledge in Video-Language Foundation Models

Cited by: 0
Authors
Wang, Zhenhailong [1 ]
Blume, Ansel [1 ]
Li, Sha [1 ]
Liu, Genglin [1 ]
Cho, Jaemin [2 ]
Tang, Zineng [2 ]
Bansal, Mohit [2 ]
Ji, Heng [1 ]
Affiliations
[1] UIUC, Champaign, IL 61820 USA
[2] UNC, Chapel Hill, NC USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023) | 2023
Keywords
(none listed)
DOI
(not available)
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Action knowledge involves the understanding of textual, visual, and temporal aspects of actions. We introduce the Action Dynamics Benchmark (ActionBench) containing two carefully designed probing tasks: Action Antonym and Video Reversal, which target the model's multimodal alignment capabilities and temporal understanding skills, respectively. Despite recent video-language models' (VidLMs) impressive performance on various benchmark tasks, our diagnostic tasks reveal their surprising deficiency (near-random performance) in action knowledge, suggesting that current models rely on object recognition abilities as a shortcut for action understanding. To remedy this, we propose a novel framework, PAXION, along with a new Discriminative Video Dynamics Modeling (DVDM) objective. The PAXION framework utilizes a Knowledge Patcher network to encode new action knowledge and a Knowledge Fuser component to integrate the Patcher into frozen VidLMs without compromising their existing capabilities. Because of the limitations of the widely used Video-Text Contrastive (VTC) loss for learning action knowledge, we introduce the DVDM objective to train the Knowledge Patcher. DVDM forces the model to encode the correlation between the action text and the correct ordering of video frames. Our extensive analyses show that PAXION and DVDM together effectively fill the gap in action knowledge understanding (~50% -> 80%), while maintaining or improving performance on a wide spectrum of both object- and action-centric downstream tasks. The code and data will be made publicly available for research purposes at https://github.com/MikeWangWZHL/Paxion.git.
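The abstract describes DVDM only at a high level (discriminating the true action text from an antonym, and the forward frame order from a reversed clip). The PyTorch sketch below is only an illustration of how such a discriminative objective could be combined with a standard VTC loss, using reversed clips and action-antonym captions as hard negatives; the function names (encode_video, encode_text, dvdm_style_loss) and the exact loss composition are assumptions for exposition, not the authors' implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

def vtc_loss(video_emb, text_emb, temperature=0.07):
    """Standard symmetric video-text contrastive (VTC) loss over a batch of paired embeddings."""
    video_emb = F.normalize(video_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = video_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

def dvdm_style_loss(encode_video, encode_text, frames, captions, antonym_captions, temperature=0.07):
    """Illustrative DVDM-style objective (assumed form, not the paper's exact loss):
    in addition to VTC, the model must (1) prefer the true action caption over its
    antonym caption, and (2) prefer the original frame order over the reversed clip."""
    v = encode_video(frames)                             # (B, D) embeddings of forward clips
    v_rev = encode_video(torch.flip(frames, dims=[1]))   # reversed frame order; assumes frames is (B, T, ...)
    t = encode_text(captions)                            # (B, D) true action captions
    t_ant = encode_text(antonym_captions)                # (B, D) antonym captions, e.g. "push" vs. "pull"

    loss = vtc_loss(v, t)
    zeros = torch.zeros(v.size(0), dtype=torch.long, device=v.device)

    # Action-antonym discrimination: true caption should score higher than its antonym.
    sim_pos = F.cosine_similarity(v, t) / temperature
    sim_ant = F.cosine_similarity(v, t_ant) / temperature
    loss_antonym = F.cross_entropy(torch.stack([sim_pos, sim_ant], dim=1), zeros)

    # Video-reversal discrimination: forward clip should match the caption better than the reversed clip.
    sim_rev = F.cosine_similarity(v_rev, t) / temperature
    loss_reversal = F.cross_entropy(torch.stack([sim_pos, sim_rev], dim=1), zeros)

    return loss + loss_antonym + loss_reversal
```

In this reading, the reversed clip and the antonym caption act as hard negatives that cannot be resolved by object recognition alone, which is the shortcut the diagnostic tasks expose.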
Pages: 21