Robust multi-task learning and online refinement for spacecraft pose estimation across domain gap

Cited by: 33
Authors
Park, Tae Ha [1]
D'Amico, Simone [1]
Affiliation
[1] Stanford University, Department of Aeronautics and Astronautics, 496 Lomita Mall, Stanford, CA 94305, USA
Keywords
Vision-only navigation; Rendezvous; Pose estimation; Computer vision; Deep learning; Domain gap
DOI
10.1016/j.asr.2023.03.036
Chinese Library Classification
V [Aeronautics, Astronautics]
Subject Classification Code
08; 0825
Abstract
This work presents Spacecraft Pose Network v2 (SPNv2), a Convolutional Neural Network (CNN) for pose estimation of noncooperative spacecraft across domain gap. SPNv2 is a multi-scale, multi-task CNN consisting of a shared multi-scale feature encoder and multiple prediction heads that perform different tasks on the shared feature output. These tasks are all related to detection and pose estimation of a target spacecraft from an image, such as prediction of pre-defined satellite keypoints, direct pose regression, and binary segmentation of the satellite foreground. It is shown that by jointly training on different yet related tasks with extensive data augmentation on synthetic images only, the shared encoder learns features that are common across image domains whose visual characteristics differ fundamentally from those of the synthetic images. This work also introduces Online Domain Refinement (ODR), which refines the parameters of the normalization layers of SPNv2 on target-domain images online at deployment. Specifically, ODR performs self-supervised entropy minimization of the predicted satellite foreground, thereby improving the CNN's performance on the target-domain images without their pose labels and with minimal computational effort. The GitHub repository for SPNv2 is available at https://github.com/tpark94/spnv2. (c) 2023 COSPAR. Published by Elsevier B.V. All rights reserved.
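The ODR idea described in the abstract (updating only the normalization-layer parameters by minimizing the entropy of the predicted foreground mask on unlabeled target-domain images) can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation (see the linked GitHub repository for that); it assumes a hypothetical SPNv2-like model whose segmentation head returns per-pixel foreground logits of shape (B, 1, H, W), and the names collect_norm_params, odr_step, and target_domain_loader are illustrative.

```python
# Minimal sketch of Online Domain Refinement (ODR) as described in the abstract.
# Assumption: `model` exposes a binary-segmentation output as per-pixel logits.
import torch


def collect_norm_params(model: torch.nn.Module):
    """Gather only the affine parameters of normalization layers for refinement."""
    params = []
    for module in model.modules():
        if isinstance(module, (torch.nn.BatchNorm2d, torch.nn.GroupNorm, torch.nn.LayerNorm)):
            params += [p for p in module.parameters() if p.requires_grad]
    return params


def odr_step(model, images, optimizer):
    """One self-supervised step: minimize the entropy of the predicted
    foreground mask on unlabeled target-domain images."""
    logits = model(images)                       # (B, 1, H, W) foreground logits (assumed)
    p = torch.sigmoid(logits)
    entropy = -(p * torch.log(p + 1e-8) + (1 - p) * torch.log(1 - p + 1e-8))
    loss = entropy.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Usage sketch: at deployment, refine only the normalization-layer parameters.
# model = SPNv2(...)                             # hypothetical constructor
# optimizer = torch.optim.SGD(collect_norm_params(model), lr=1e-3)
# for images in target_domain_loader:            # unlabeled target-domain images
#     odr_step(model, images, optimizer)
```

Because only the normalization-layer parameters receive gradients, each refinement step is cheap relative to full fine-tuning and requires no pose labels, consistent with the abstract's claim of minimal computational effort.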
Pages: 5726-5740 (15 pages)