Multi-Object Tracking With Separation in Deep Space

Times Cited: 0
Authors
Hu, Mengjie [1 ]
Wang, Haotian [1 ]
Wang, Hao [2 ]
Li, Binyu [2 ]
Cao, Shixiang [2 ]
Zhan, Tao [1 ]
Zhu, Xiaotong [3 ]
Liu, Tianqi [4 ]
Liu, Chun [1 ]
Song, Qing [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Intelligent Engn & Automat, Beijing 100876, Peoples R China
[2] Beijing Inst Space Mech & Elect, Beijing 100094, Peoples R China
[3] Beihang Univ, State Key Lab Complex & Crit Software Environm, Beijing 100191, Peoples R China
[4] Tsinghua Univ, Dept Elect Engn, Beijing 100084, Peoples R China
Source
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING | 2025 / Vol. 63
Keywords
Trajectory; Tracking; Deep-space communications; Videos; Transformers; Predictive models; Kalman filters; Feature extraction; Deep learning; Satellites; Deep space; graph network; multi-object tracking (MOT); object separation; position encoding; transformer encoder; FILTER;
DOI
10.1109/TGRS.2024.3522290
Chinese Library Classification (CLC)
P3 [Geophysics]; P59 [Geochemistry]
Subject Classification Codes
0708 ; 070902 ;
Abstract
In the deep space environment, some objects may split into several small fragments during movement, and these objects often appear as mere points in satellite imagery. In this article, we study multi-object tracking (MOT) for such objects. First, we propose a simulation dataset, ScatterDataset, which simulates the movement and separation of objects against a deep space background; by assigning two IDs to each trajectory, we describe the relationship between a trajectory before and after separation. Second, we present an end-to-end motion association model, ScatterNet, which encodes the position information of trajectories and detections into motion features; these features are aggregated temporally by a Transformer encoder and spatially by a graph network, and the association results are obtained by computing the similarity between the resulting features. Finally, we introduce ScatterTracker, a tracker suited to scenarios with object separation. Experiments on ScatterDataset against state-of-the-art tracking methods demonstrate that our approach achieves significant performance improvements in deep space scenarios. The code is available at: https://github.com/wht-bupt/ScatterTrack.
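The following is a minimal, self-contained sketch of the association pipeline outlined in the abstract, not the authors' released implementation (see the repository above for that): position sequences are embedded into motion features, aggregated temporally by a Transformer encoder and spatially by a simple graph-style message-passing step, and matched by cosine similarity. It assumes PyTorch and SciPy; all module and variable names (e.g., ScatterNetSketch) are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment


class ScatterNetSketch(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        # Position encoding: map (x, y) coordinates to d_model-dim features.
        self.pos_embed = nn.Linear(2, d_model)
        # Temporal aggregation over each trajectory's position history.
        enc_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        # Spatial aggregation: one round of message passing between objects.
        self.spatial = nn.Linear(2 * d_model, d_model)

    def forward(self, traj_pos, det_pos):
        # traj_pos: (N, T, 2) past positions of N trajectories over T frames
        # det_pos:  (M, 2)    detections in the current frame
        traj_feat = self.temporal(self.pos_embed(traj_pos)).mean(dim=1)  # (N, d)
        det_feat = self.pos_embed(det_pos)                               # (M, d)

        # Fully connected graph over trajectories: mix in neighbor context.
        neighbors = traj_feat.mean(dim=0, keepdim=True).expand_as(traj_feat)
        traj_feat = self.spatial(torch.cat([traj_feat, neighbors], dim=-1))

        # Cosine-similarity association matrix: trajectories vs. detections.
        sim = F.normalize(traj_feat, dim=-1) @ F.normalize(det_feat, dim=-1).T
        return sim  # (N, M)


if __name__ == "__main__":
    model = ScatterNetSketch()
    sim = model(torch.randn(5, 10, 2), torch.randn(7, 2))
    # Hungarian assignment on the negated similarity yields track-detection
    # pairs; handling separation (one trajectory claiming two detections, as
    # described by the dual-ID labels) is left out of this sketch.
    rows, cols = linear_sum_assignment(-sim.detach().numpy())
    print(list(zip(rows.tolist(), cols.tolist())))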
Pages: 20