Deep Reinforcement Learning for Active Target Tracking

Cited by: 4
Authors
Jeong, Heejin [1]
Hassani, Hamed [1]
Morari, Manfred [1]
Lee, Daniel D. [2]
Pappas, George J. [1]
Affiliations
[1] Univ Penn, Dept Elect & Syst Engn, Philadelphia, PA 19104 USA
[2] Cornell Univ, Dept Elect & Comp Engn, Ithaca, NY 14850 USA
DOI: 10.1109/ICRA48506.2021.9561258
Chinese Library Classification (CLC): TP [Automation & Computer Technology]
Discipline Code: 0812
Abstract
We solve active target tracking, one of the essential tasks in autonomous systems, using a deep reinforcement learning (RL) approach. In this problem, an autonomous agent is tasked with acquiring information about targets of interest using its on-board sensors. The classical challenges in this problem are dependence on system models and the difficulty of computing information-theoretic cost functions over long planning horizons. RL addresses both challenges: the length of its effective planning horizon does not affect the computational complexity, and it removes the algorithm's strong dependence on system models. In particular, we introduce the Active Tracking Target Network (ATTN), a unified deep RL policy capable of solving the major sub-tasks of active target tracking: in-sight tracking, navigation, and exploration. The policy shows robust behavior when tracking agile and anomalous targets with a partially known target model. Additionally, the same policy is able to navigate obstacle environments to reach distant targets, and to explore the environment when targets are positioned in unexpected locations.
Pages: 1825-1831 (7 pages)
Related Papers (50 total)
  • [41] Cheng, Shuai; Sun, Jun-Xi; Cao, Yong-Gang; Zhao, Li-Rong. Target tracking based on incremental deep learning. Guangxue Jingmi Gongcheng/Optics and Precision Engineering, 2015, 23(04): 1161-1170.
  • [42] Sivertsvik, Matias; Sumskiy, Kirill; Misimi, Ekrem. Learning active manipulation to target shapes with model-free, long-horizon deep reinforcement learning. 2024 IEEE International Conference on Robotics and Automation (ICRA 2024), 2024: 5411-5418.
  • [43] Ren, Liangliang; Yuan, Xin; Lu, Jiwen; Yang, Ming; Zhou, Jie. Deep Reinforcement Learning with Iterative Shift for Visual Tracking. Computer Vision - ECCV 2018, Pt IX, 2018, 11213: 697-713.
  • [44] Zhang, Dawei; Zheng, Zhonglong; Jia, Riheng; Li, Minglu. Visual Tracking via Hierarchical Deep Reinforcement Learning. Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21), 2021, 35: 3315-3323.
  • [45] Cruciata, Giorgio; Lo Presti, Liliana; La Cascia, Marco. On the Use of Deep Reinforcement Learning for Visual Tracking: A Survey. IEEE Access, 2021, 9: 120880-120900.
  • [46] Pienroj, Panin; Schonborn, Sandro; Birke, Robert. Exploring Deep Reinforcement Learning for Autonomous Powerline Tracking. IEEE Conference on Computer Communications Workshops (INFOCOM 2019 WKSHPS), 2019: 496-501.
  • [47] Wang, Junjie; Su, Xiaohong; Zhao, Lingling; Zhang, Jun. Deep Reinforcement Learning for Data Association in Cell Tracking. Frontiers in Bioengineering and Biotechnology, 2020, 8.
  • [48] Chen, Zhongze; Li, Jing; Wu, Jia; Chang, Jun; Xiao, Yafu; Wang, Xiaoting. Drift-Proof Tracking With Deep Reinforcement Learning. IEEE Transactions on Multimedia, 2022, 24: 609-624.
  • [49] Zhong, Zhao; Yang, Zichen; Feng, Weitao; Wu, Wei; Hu, Yangyang; Liu, Cheng-Lin. Decision Controller for Object Tracking With Deep Reinforcement Learning. IEEE Access, 2019, 7: 28069-28079.
  • [50] Liu, Yong; Jiang, Xiao; Li, Xiang; Sun, Boxi; Qian, Sen; Wu, Yihao. Robot target tracking control considering obstacle avoidance based on combination of deep reinforcement learning and PID. Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, 2025, 239(03).