A DeNoising FPN With Transformer R-CNN for Tiny Object Detection
Cited by: 16
Authors: Liu, Hou-I [1]; Tseng, Yu-Wen [2]; Chang, Kai-Cheng [2]; Wang, Pin-Jyun [1]; Shuai, Hong-Han [1]; Cheng, Wen-Huang [3]
Affiliations:
[1] Natl Yang Ming Chiao Tung Univ, Dept Elect & Elect Engn, Hsinchu 300, Taiwan
[2] Natl Yang Ming Chiao Tung Univ, Inst Elect, Hsinchu 300, Taiwan
[3] Natl Taiwan Univ NTU, Dept Comp Sci & Informat Engn, Taipei 106, Taiwan
Source:
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
2024, Vol. 62
Keywords:
Feature extraction;
Semantics;
Object detection;
Noise;
Detectors;
Transformers;
Noise reduction;
Aerial image;
contrastive learning;
noise reduction;
tiny object detection;
transformer-based detector;
DISTANCE;
NETWORK;
DOI: 10.1109/TGRS.2024.3396489
CLC Classification: P3 [Geophysics]; P59 [Geochemistry]
Discipline Codes: 0708; 070902
Abstract:
Despite notable advancements in the field of computer vision (CV), the precise detection of tiny objects continues to pose a significant challenge, largely due to the minuscule pixel representation allocated to these objects in imagery data. This challenge resonates profoundly in the domain of geoscience and remote sensing, where high-fidelity detection of tiny objects can facilitate a myriad of applications ranging from urban planning to environmental monitoring. In this article, we propose a new framework, namely, DeNoising feature pyramid network (FPN) with Trans R-CNN (DNTR), to improve the performance of tiny object detection. DNTR consists of an easy plug-in design, DeNoising FPN (DN-FPN), and an effective Transformer-based detector, Trans region-based convolutional neural network (R-CNN). Specifically, feature fusion in the FPN is important for detecting multiscale objects; however, noisy features may be produced during the fusion process since there is no regularization between the features of different scales. We therefore introduce a DN-FPN module that uses contrastive learning to suppress noise in each level's features along the top-down path of the FPN. In addition, building on the two-stage framework, we replace the dated R-CNN detector with a novel Trans R-CNN detector that focuses on the representation of tiny objects via self-attention. The experimental results demonstrate that our DNTR outperforms the baselines by at least 17.4% in terms of $\text{AP}_{vt}$ on the AI-TOD dataset and by 9.6% in terms of average precision (AP) on the VisDrone dataset. Our code will be available at https://github.com/hoiliu-0801/DNTR.
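To make the DN-FPN idea concrete, below is a minimal PyTorch sketch of an InfoNCE-style contrastive regularizer between the lateral (bottom-up) and fused (top-down) features at each pyramid level. Everything here is an illustrative assumption, not the authors' implementation: the global-average-pooled level embeddings, the in-batch negatives, the temperature of 0.1, and the function names (`info_nce`, `fpn_denoising_loss`) are all hypothetical; consult the linked repository for the actual DN-FPN design.

```python
# Hedged sketch of a contrastive "denoising" regularizer for FPN fusion.
# Assumption: for each level, the lateral and top-down feature maps of the
# same image should agree, while differing from other images in the batch.
import torch
import torch.nn.functional as F

def info_nce(anchor: torch.Tensor, positive: torch.Tensor,
             temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE: pull each anchor toward its paired positive and push it
    away from the other samples in the batch (in-batch negatives)."""
    anchor = F.normalize(anchor, dim=-1)      # (B, D)
    positive = F.normalize(positive, dim=-1)  # (B, D)
    logits = anchor @ positive.t() / temperature  # (B, B) similarities
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)   # diagonal = positive pairs

def fpn_denoising_loss(lateral_feats, topdown_feats,
                       temperature: float = 0.1) -> torch.Tensor:
    """Hypothetical regularizer: global-pool each level's lateral and
    fused maps into vectors and ask the two views of the same image to
    agree, so top-down fusion does not inject scale-mismatched noise."""
    loss = 0.0
    for lat, td in zip(lateral_feats, topdown_feats):
        lat_vec = lat.mean(dim=(2, 3))  # (B, C) global average pooling
        td_vec = td.mean(dim=(2, 3))
        loss = loss + info_nce(lat_vec, td_vec, temperature)
    return loss / len(lateral_feats)

if __name__ == "__main__":
    # Toy pyramid: 4 levels, batch of 2, 256 channels each.
    laterals = [torch.randn(2, 256, s, s) for s in (64, 32, 16, 8)]
    topdowns = [torch.randn(2, 256, s, s) for s in (64, 32, 16, 8)]
    print(fpn_denoising_loss(laterals, topdowns))
```

In practice such a loss would be added to the detector's training objective with a small weight, leaving the FPN architecture itself unchanged, which matches the abstract's description of DN-FPN as an "easy plug-in design".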
Pages: 1-15