Unsupervised Domain Transfer for Task Automation in Unmanned Underwater Vehicle Intervention Operations

Citations: 0
Authors
Sans-Muntadas, Albert [1]
Skaldebo, Martin B. [1]
Nielsen, Mikkel Cornelius [1,2]
Schjolberg, Ingrid [1]
Affiliations
[1] Norwegian Univ Sci & Technol, Dept Marine Technol, N-7491 Trondheim, Norway
[2] UBIQ Aerosp, N-7011 Trondheim, Norway
Keywords
Domain adaptation; neural networks; object detection; synthetic data; underwater vehicles
DOI
10.1109/JOE.2021.3126016
CLC classification number
TU [Building Science]
Discipline classification code
0813
Abstract
During underwater intervention and monitoring operations, large amounts of imagery and sensory data are produced and stored, and these data have the potential to help automate future operations. In this article, we propose a method for producing segmented data that can be used to train neural networks for desired control tasks. By combining unlabeled images from previous operations with a paired 3-D drawing of the same monitored object, we can train a generative adversarial network to learn the adaptation between the two domains and produce synthetic images with a high resemblance to the original footage. The clear advantage is that the enhanced synthetic images contain the segmented information of the object without the expensive cost of manual annotation. The enhanced segmented data are used to train an object detector that predicts bounding boxes locating the segmented object. The detector is used in two ways: 1) to analyze the quality of the segmentation and 2) to command a control task for a remotely operated vehicle relative to the object of interest. The main control task is yaw control, which keeps the object centered in the camera frame. Additionally, we explore how overtraining the domain adaptation network degrades the accuracy of the object detector, by comparing detectors trained on datasets produced at different epochs of the domain adaptation network.
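The yaw-centering task described in the abstract can be illustrated with a minimal sketch: a proportional controller that turns the vehicle so the detected bounding box stays horizontally centered in the camera frame. All names, the gain value, and the clamping range below are illustrative assumptions, not the authors' implementation.

```python
def yaw_rate_command(bbox, frame_width, gain=0.5):
    """Return a normalized yaw-rate command in [-1, 1].

    bbox: (x_min, y_min, x_max, y_max) in pixels, as predicted by an
          object detector.
    frame_width: camera image width in pixels.
    """
    x_min, _, x_max, _ = bbox
    bbox_center_x = 0.5 * (x_min + x_max)
    # Horizontal offset of the object from the image center,
    # normalized to [-1, 1].
    error = (bbox_center_x - 0.5 * frame_width) / (0.5 * frame_width)
    # Proportional control: yaw toward the object; clamp to actuator limits.
    return max(-1.0, min(1.0, gain * error))
```

For example, a box centered in a 640-pixel-wide frame yields a zero command, while a box near the left edge yields a negative (leftward) yaw rate proportional to the offset.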
Pages: 312-321
Page count: 10