Consistency-constrained RGB-T crowd counting via mutual information maximization

Times Cited: 3
Authors
Guo, Qiang [1 ]
Yuan, Pengcheng [1 ]
Huang, Xiangming [1 ]
Ye, Yangdong [1 ]
Affiliations
[1] Zhengzhou Univ, Sch Comp & Artificial Intelligence, 100 Sci Ave, Zhengzhou 450001, Henan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Cross-modal; RGB-T crowd counting; Consistent information; Mutual information;
DOI
10.1007/s40747-024-01427-x
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The incorporation of thermal imaging data in RGB-T images has demonstrated its usefulness in cross-modal crowd counting by offering information complementary to RGB representations. Despite achieving satisfactory results in RGB-T crowd counting, many existing methods still face two significant limitations: (1) overlooking the heterogeneous gap between modalities complicates the effective integration of multimodal features, and (2) failing to mine cross-modal consistency hinders the full exploitation of the unique complementary strengths inherent in each modality. To this end, we present C4-MIM, a novel Consistency-constrained RGB-T Crowd Counting approach via Mutual Information Maximization. It effectively leverages multimodal information by learning the consistency between the RGB and thermal modalities, thereby enhancing the performance of cross-modal counting. Specifically, we first extract feature representations of the two modalities with a shared encoder to moderate the heterogeneous gap, since both modalities then obey identical coding rules with shared parameters. Then, we mine the consistent information across modalities to better learn conducive information and improve the quality of the feature representations. To this end, we formulate the complementarity of the multimodal representations as a mutual information maximization regularizer, so that the consistency between modalities is maximized before the multimodal information is combined. Finally, we simply aggregate the feature representations of the different modalities and feed them into a regressor to output the density maps. The proposed approach can be implemented with arbitrary backbone networks and remains robust when a single modality is unavailable or seriously corrupted.
Extensive experiments conducted on the RGBT-CC and DroneRGBT benchmarks evaluate the effectiveness and robustness of the proposed approach, demonstrating its superior performance compared to state-of-the-art approaches.
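The abstract does not give the exact form of the paper's mutual-information regularizer; a common way to maximize a lower bound on cross-modal mutual information is an InfoNCE-style contrastive estimator. The sketch below is an illustrative assumption, not the authors' implementation: a hypothetical `infonce` helper scores each RGB feature against all thermal features in a batch, so that minimizing the loss pulls paired RGB/thermal features together (maximizing their consistency) while pushing mismatched pairs apart.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def infonce(rgb_feats, thermal_feats, temperature=0.1):
    """InfoNCE loss: the negative of a lower bound on the mutual
    information between paired RGB and thermal features. Each RGB
    feature i is contrasted against every thermal feature in the
    batch, with thermal_feats[i] as the positive match."""
    n = len(rgb_feats)
    loss = 0.0
    for i in range(n):
        logits = [cosine(rgb_feats[i], t) / temperature for t in thermal_feats]
        # log-sum-exp with max-shift for numerical stability
        m = max(logits)
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += -(logits[i] - log_denom)  # -log softmax at the positive pair
    return loss / n
```

With well-aligned paired features the loss is near zero (the bound on mutual information is tight), while misaligned pairs yield a large loss, which is what drives the encoder toward cross-modal consistency before fusion.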
Pages: 5049-5070
Page count: 22