REVERSE DOMAIN ADAPTATION FOR INDOOR CAMERA POSE REGRESSION

Cited by: 0
Authors
Acharya, Debaditya [1 ]
Khoshelham, Kourosh [2 ,3 ]
Affiliations
[1] RMIT Univ, Geospatial Sci, Melbourne, Vic 3000, Australia
[2] Univ Melbourne, Dept Infrastruct Engn, Parkville, Vic 3010, Australia
[3] Building 4.0 CRC, Caulfield, Vic 3145, Australia
Source
GEOSPATIAL WEEK 2023, VOL. 10-1 | 2023
Keywords
Domain adaptation; GAN; deep learning; Indoor localization; 3D building models; camera pose regression; BIM;
DOI
10.5194/isprs-annals-X-1-W1-2023-453-2023
Abstract
Synthetic images have been used to mitigate the scarcity of annotated data for training deep learning approaches, followed by domain adaptation that reduces the gap between synthetic and real images. One such approach uses Generative Adversarial Networks (GANs), such as CycleGAN, to bridge the domain gap: the synthetic images are translated into real-looking synthetic images, which are then used to train the deep learning models. In this article, we explore the less intuitive alternative strategy of domain adaptation in the reverse direction, i.e. real-to-synthetic adaptation. We train the deep learning models directly on synthetic data and, during inference, apply domain adaptation with CycleGAN to convert the real images into synthetic-looking real images. This strategy reduces the amount of data conversion required during training, can potentially generate artefact-free images compared to the harder synthetic-to-real case, and can improve the performance of deep learning models. We demonstrate the success of this strategy in indoor localisation through experiments with camera pose regression. The experimental results show an improvement in localisation accuracy with the proposed domain adaptation compared to synthetic-to-real adaptation.
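The inference pipeline described in the abstract can be summarised as: translate a real query image into the synthetic domain with the trained CycleGAN real-to-synthetic generator, then regress the camera pose with a network trained only on synthetic images. The following is a minimal PyTorch sketch of that pipeline; the class names (RealToSyntheticGenerator, PoseRegressor), the ResNet-18 backbone, and the 7-dimensional position-plus-quaternion output are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision import models


class RealToSyntheticGenerator(nn.Module):
    """Placeholder for the trained CycleGAN real-to-synthetic generator.
    A small convolutional network stands in for it; in practice the trained
    generator weights would be loaded here (hypothetical stand-in)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=7, padding=3),
            nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)


class PoseRegressor(nn.Module):
    """PoseNet-style regressor trained on synthetic images only:
    CNN backbone plus a 7-dim head (3 for position, 4 for an orientation
    quaternion). ResNet-18 is an assumption for illustration."""

    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()  # expose the 512-dim feature vector
        self.backbone = backbone
        self.head = nn.Linear(512, 7)

    def forward(self, x):
        return self.head(self.backbone(x))


@torch.no_grad()
def localise(real_image, generator, regressor):
    """Reverse domain adaptation at inference time: map the real query image
    into the synthetic domain, then regress the camera pose with the
    synthetically trained model."""
    synthetic_like = generator(real_image)   # real -> synthetic-looking image
    pose = regressor(synthetic_like)         # [x, y, z, qw, qx, qy, qz]
    position, quaternion = pose[:, :3], pose[:, 3:]
    quaternion = quaternion / quaternion.norm(dim=1, keepdim=True)
    return position, quaternion


if __name__ == "__main__":
    G = RealToSyntheticGenerator().eval()
    P = PoseRegressor().eval()
    query = torch.rand(1, 3, 224, 224)  # stand-in for a real indoor query image
    t, q = localise(query, G, P)
    print("position:", t, "orientation (quaternion):", q)
```

Note that, in this arrangement, only the query images are translated at inference time, so no domain conversion of the (typically much larger) synthetic training set is needed, which is the data-conversion saving the abstract refers to.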
Pages: 453 - 460
Page count: 8
Related Papers
50 records in total
  • [31] Pixel-Level Domain Adaptation for Real-to-Sim Object Pose Estimation
    Qian, Kun
    Duan, Yanhui
    Luo, Chaomin
    Zhao, Yongqiang
    Jing, Xingshuo
    IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2023, 15 (03) : 1618 - 1627
  • [32] Weakly Supervised Hand Pose Recovery with Domain Adaptation by Low-Rank Alignment
    Hong, Chaoqun
    Yu, Jun
    Xie, Rongsheng
    Tao, Dapeng
    2016 IEEE 16TH INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS (ICDMW), 2016, : 446 - 453
  • [33] Domain adaptation for regression under Beer-Lambert's law
    Nikzad-Langerodi, Ramin
    Zellinger, Werner
    Saminger-Platz, Susanne
    Moser, Bernhard A.
    KNOWLEDGE-BASED SYSTEMS, 2020, 210
  • [34] Unsupervised Multi-camera Domain Adaptation for Object Detection in Cultural Sites
    Pasqualino, Giovanni
    Furnari, Antonino
    Farinella, Giovanni Maria
    IMAGE ANALYSIS AND PROCESSING, ICIAP 2022, PT I, 2022, 13231 : 713 - 724
  • [35] Automatic Update for Wi-Fi Fingerprinting Indoor Localization via Multi-Target Domain Adaptation
    Wang, Jiankun
    Zhao, Zenghua
    Ou, Mengling
    Cui, Jiayang
    Wu, Bin
    PROCEEDINGS OF THE ACM ON INTERACTIVE MOBILE WEARABLE AND UBIQUITOUS TECHNOLOGIES-IMWUT, 2023, 7 (02):
  • [36] A two-layer regression network for robust and accurate domain adaptation
    Lee, Geonseok
    Lee, Kichun
    PATTERN RECOGNITION, 2025, 158
  • [37] Integral Human Pose Regression
    Sun, Xiao
    Xiao, Bin
    Wei, Fangyin
    Liang, Shuang
    Wei, Yichen
    COMPUTER VISION - ECCV 2018, PT VI, 2018, 11210 : 536 - 553
  • [38] Deep Kinematic Pose Regression
    Zhou, Xingyi
    Sun, Xiao
    Zhang, Wei
    Liang, Shuang
    Wei, Yichen
    COMPUTER VISION - ECCV 2016 WORKSHOPS, PT III, 2016, 9915 : 186 - 201
  • [39] BIM-PoseNet: Indoor camera localisation using a 3D indoor model and deep learning from synthetic images
    Acharya, Debaditya
    Khoshelham, Kourosh
    Winter, Stephan
    ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2019, 150 : 245 - 258
  • [40] 3D Hand Pose Estimation with a Single Infrared Camera via Domain Transfer Learning
    Park, Gabyong
    Kim, Tae-Kyun
    Woo, Woontack
    2020 IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY (ISMAR 2020), 2020, : 588 - 599