Enhancing continuous control of mobile robots for end-to-end visual active tracking

Cited by: 26
Authors
Devo, Alessandro [1 ]
Dionigi, Alberto [1 ]
Costante, Gabriele [1 ]
Affiliations
[1] Univ Perugia, Dept Engn, I-06125 Perugia, Italy
Keywords
Visual active tracking; Deep learning for robotic applications; Reinforcement learning; Simulation
DOI
10.1016/j.robot.2021.103799
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
In recent decades, visual target tracking has been one of the primary research interests of the Robotics community. Recent advances in Deep Learning have made visual tracking approaches effective and practical in a wide variety of applications, ranging from automotive to surveillance and human assistance. However, the majority of existing works focus exclusively on passive visual tracking, i.e., tracking elements in sequences of images under the assumption that no actions can be taken to adapt the camera position to the motion of the tracked entity. In this work, by contrast, we address visual active tracking, in which the tracker has to actively search for and track a specified target. Current State-of-the-Art approaches use Deep Reinforcement Learning (DRL) techniques to address the problem in an end-to-end manner. However, two main problems arise: (i) most contributions focus only on discrete action spaces, and those that consider continuous control do not achieve the same level of performance; and (ii) if not properly tuned, DRL models can be challenging to train, resulting in considerably slow learning progress and poor final performance. To address these challenges, we propose a novel DRL-based visual active tracking system that provides continuous action policies. To accelerate training and improve overall performance, we introduce additional objective functions and a Heuristic Trajectory Generator (HTG) to facilitate learning. Through extensive experimentation, we show that our method matches and surpasses the performance of other State-of-the-Art approaches, and we demonstrate that, even though it is trained exclusively in simulation, it successfully performs visual active tracking in real scenarios. (C) 2021 Elsevier B.V. All rights reserved.
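The continuous action policies the abstract refers to are typically realized as a stochastic policy head that maps encoded camera observations to bounded velocity commands. The sketch below is purely illustrative and is not the paper's implementation: the Gaussian policy head, the feature dimension, and the two-dimensional [linear, angular] velocity output are all assumptions chosen to show the general pattern used in actor-critic DRL for continuous robot control.

```python
import numpy as np

# Illustrative sketch (NOT the paper's model): a Gaussian policy head mapping
# an encoded observation to continuous velocity commands. The feature size,
# action layout [linear_v, angular_w], and fixed log-std are assumptions.

rng = np.random.default_rng(0)

class GaussianPolicyHead:
    def __init__(self, feat_dim=64, act_dim=2):
        # Single linear layer producing the action mean; the log standard
        # deviation is a state-independent learnable parameter in many
        # actor-critic implementations.
        self.W = rng.normal(scale=0.1, size=(act_dim, feat_dim))
        self.b = np.zeros(act_dim)
        self.log_std = np.full(act_dim, -0.5)

    def act(self, features):
        # Squash the mean with tanh so commands stay in [-1, 1], then sample
        # from the Gaussian and clip back into the valid command range.
        mean = np.tanh(self.W @ features + self.b)
        std = np.exp(self.log_std)
        action = mean + std * rng.normal(size=mean.shape)
        return np.clip(action, -1.0, 1.0)

policy = GaussianPolicyHead()
obs_features = rng.normal(size=64)   # stand-in for a CNN encoding of the camera frame
cmd = policy.act(obs_features)       # continuous [v, w] command for the mobile robot
```

During training the sampled action's log-probability would feed a policy-gradient loss; at deployment the mean action is usually used directly for smoother robot motion.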
Pages: 14