Dual-referenced assistive network for action quality assessment

Times Cited: 0
Authors
Huang, Keyi [1 ]
Tian, Yi [1 ]
Yu, Chen [1 ]
Huang, Yaping [1 ]
Affiliations
[1] Beijing Jiaotong Univ, Sch Comp Sci & Technol, Beijing Key Lab Traff Data Anal & Min, Beijing 100044, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Action quality assessment; Human action understanding; Video
DOI
10.1016/j.neucom.2024.128786
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Action quality assessment (AQA) aims to evaluate how well a specific action is performed. It is a challenging task because it requires identifying subtle differences between videos containing the same action. Most existing AQA methods directly adopt a network pretrained for other tasks to extract video features, which are too coarse to capture the fine-grained details of action quality. In this paper, we propose a novel Dual-Referenced Assistive (DuRA) network that refines the original coarse-grained features into fine-grained, quality-oriented representations. Specifically, we introduce two levels of referenced assistants that highlight discriminative quality-related content by comparing a target video against referenced objects, rather than estimating a quality score from an individual video in isolation. First, we design a Rating-guided Attention module, which leverages a series of semantic-level referenced assistants to acquire implicit hierarchical semantic knowledge and progressively emphasize the quality-focused features embedded in the original information. Second, we design a pair of Consistency Preserving constraints, which introduce a set of individual-level referenced assistants to further eliminate score-unrelated information through more detailed comparisons of differences between actions. Experiments show that our method achieves promising performance on the AQA-7 and MTL-AQA datasets.
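The abstract describes the two referenced-assistant components only at a high level, so the sketches below show one plausible form each idea could take. All module names, tensor shapes, and loss formulations here are illustrative assumptions, not the authors' implementation.

First, a minimal sketch of a rating-guided attention step, assuming a learnable bank of semantic-level (rating-level) reference embeddings that re-weight a coarse video feature:

    # Hypothetical sketch: rating-guided attention over a bank of
    # rating-level reference embeddings. Shapes and names are
    # illustrative assumptions, not the paper's implementation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RatingGuidedAttention(nn.Module):
        """Refines a coarse video feature by attending over learnable
        reference embeddings, one per coarse rating level."""

        def __init__(self, feat_dim: int, num_levels: int):
            super().__init__()
            # One reference embedding per rating level (e.g., poor/fair/good).
            self.references = nn.Parameter(torch.randn(num_levels, feat_dim))
            self.query = nn.Linear(feat_dim, feat_dim)
            self.key = nn.Linear(feat_dim, feat_dim)

        def forward(self, video_feat: torch.Tensor) -> torch.Tensor:
            # video_feat: (batch, feat_dim) coarse clip-level feature.
            q = self.query(video_feat)                    # (B, D)
            k = self.key(self.references)                 # (L, D)
            attn = F.softmax(q @ k.t() / k.size(-1) ** 0.5, dim=-1)  # (B, L)
            # Mix in the rating-level context the target most resembles.
            context = attn @ self.references              # (B, D)
            return video_feat + context                   # residual refinement

Second, a minimal sketch of an individual-level consistency constraint, assuming it takes the common relative-scoring form in which the predicted score gap between a target and a reference video is pushed toward their ground-truth gap, suppressing score-unrelated information (uses the imports above):

    # Hypothetical sketch: consistency between predicted and true
    # score gaps for (target, reference) video pairs.
    def score_gap_consistency(pred_target: torch.Tensor,
                              pred_ref: torch.Tensor,
                              gt_target: torch.Tensor,
                              gt_ref: torch.Tensor) -> torch.Tensor:
        # All tensors: (batch,) scalar quality scores.
        pred_gap = pred_target - pred_ref
        gt_gap = gt_target - gt_ref
        # Disagreement between predicted and true score differences
        # penalizes features uncorrelated with the score.
        return F.mse_loss(pred_gap, gt_gap)

For example, RatingGuidedAttention(feat_dim=1024, num_levels=4) applied to a (2, 1024) feature tensor returns a refined (2, 1024) tensor that can then be fed to a score regressor.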
Pages: 10
Related Papers
50 records in total
  • [1] Pairwise Contrastive Learning Network for Action Quality Assessment
    Li, Mingzhe
    Zhang, Hong-Bo
    Lei, Qing
    Fan, Zongwen
    Liu, Jinghua
    Du, Ji-Xiang
    COMPUTER VISION - ECCV 2022, PT IV, 2022, 13664 : 457 - 473
  • [2] Action quality assessment via moment aware network
    Han, Jifeng
    Zhang, Yanduo
    Lu, Tao
    Wang, Jiaming
    EVOLVING SYSTEMS, 2025, 16 (2)
  • [3] Gaussian guided frame sequence encoder network for action quality assessment
    Li, Ming-Zhe
    Zhang, Hong-Bo
    Dong, Li-Jia
    Lei, Qing
    Du, Ji-Xiang
    COMPLEX & INTELLIGENT SYSTEMS, 2023, 9 (02) : 1963 - 1974
  • [4] Localization-assisted Uncertainty Score Disentanglement Network for Action Quality Assessment
    Ji, Yanli
    Ye, Lingfeng
    Huang, Huili
    Mao, Lijing
    Zhou, Yang
    Gao, Lingling
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 8590 - 8597
  • [5] Multimodal Action Quality Assessment
    Zeng, Ling-An
    Zheng, Wei-Shi
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 1600 - 1613
  • [6] Hierarchical Graph Convolutional Networks for Action Quality Assessment
    Zhou, Kanglei
    Ma, Yue
    Shum, Hubert P. H.
    Liang, Xiaohui
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (12) : 7749 - 7763
  • [7] TSA-Net: Tube Self-Attention Network for Action Quality Assessment
    Wang, Shunli
    Yang, Dingkang
    Zhai, Peng
    Chen, Chixiao
    Zhang, Lihua
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 4902 - 4910
  • [8] Self-Supervised Sub-Action Parsing Network for Semi-Supervised Action Quality Assessment
    Gedamu, Kumie
    Ji, Yanli
    Yang, Yang
    Shao, Jie
    Shen, Heng Tao
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 6057 - 6070
  • [9] Fine-Grained Spatio-Temporal Parsing Network for Action Quality Assessment
    Gedamu, Kumie
    Ji, Yanli
    Yang, Yang
    Shao, Jie
    Shen, Heng Tao
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 6386 - 6400