Distributed Path Planning for Executing Cooperative Tasks with Time Windows

Cited by: 10
Authors
Bhat, Raghavendra [1 ]
Yazicioglu, Yasin [1 ]
Aksaray, Derya [2 ]
Affiliations
[1] Univ Minnesota, Dept Elect & Comp Engn, Minneapolis, MN 55455 USA
[2] Univ Minnesota, Dept Aerosp Engn & Mech, Minneapolis, MN 55455 USA
Source
IFAC PAPERSONLINE | 2019, Vol. 52, Issue 20
Keywords
Distributed control; multi-robot systems; planning; game theory; learning
DOI
10.1016/j.ifacol.2019.12.156
CLC (Chinese Library Classification) number
TP [Automation technology, computer technology]
Discipline classification code
0812
Abstract
We investigate the distributed planning of robot trajectories for the optimal execution of cooperative tasks with time windows. In this setting, each task has a value and is completed if sufficiently many robots are simultaneously present at the necessary location within the specified time window. Tasks keep arriving periodically over cycles. The task specifications (required number of robots, location, time window, and value) are unknown a priori, and the robots try to maximize the value of completed tasks by planning their own trajectories for the upcoming cycle in a distributed manner, based on their past observations. To account for recharging and maintenance needs, the robots are required to start and end each cycle at their assigned stations in the environment. We map this problem to a game-theoretic formulation and maximize the collective performance through distributed learning. Simulation results are provided to demonstrate the performance of the proposed approach. Copyright (C) 2019. The Authors. Published by Elsevier Ltd. All rights reserved.
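As a rough, hypothetical illustration of the setup described in the abstract (not the paper's actual algorithm), the Python sketch below encodes tasks by their location, time window, required number of robots, and value, and applies an asynchronous, log-linear-learning-style update in which one robot at a time re-samples its station-to-station plan for the next cycle. All names introduced here (`Task`, `completed_value`, `log_linear_step`, `candidate_plans`, `temperature`) are illustrative assumptions.

```python
# Hypothetical sketch of cyclic tasks with time windows and a
# log-linear-learning-style trajectory update; illustrative only.
import math
import random
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Task:
    location: int            # node/cell index in the environment
    window: Tuple[int, int]  # (start, end) time steps within a cycle
    required: int            # robots needed simultaneously
    value: float

def completed_value(trajectories: List[List[int]], tasks: List[Task]) -> float:
    """Total value of tasks with >= `required` robots co-located at some
    time step inside the task's time window."""
    total = 0.0
    for task in tasks:
        for t in range(task.window[0], task.window[1] + 1):
            if sum(traj[t] == task.location for traj in trajectories) >= task.required:
                total += task.value
                break
    return total

def log_linear_step(trajectories: List[List[int]],
                    candidate_plans: List[List[List[int]]],
                    tasks: List[Task],
                    temperature: float = 0.1) -> None:
    """One asynchronous update: a randomly chosen robot samples an
    alternative station-to-station plan and switches to it with a
    Boltzmann (log-linear) probability based on the resulting value."""
    i = random.randrange(len(trajectories))
    alternative = random.choice(candidate_plans[i])
    current = completed_value(trajectories, tasks)
    proposed = completed_value(
        trajectories[:i] + [alternative] + trajectories[i + 1:], tasks)
    # P(switch) = exp(proposed/T) / (exp(current/T) + exp(proposed/T))
    if random.random() < 1.0 / (1.0 + math.exp((current - proposed) / temperature)):
        trajectories[i] = alternative

# Toy usage: two robots, a 4-step cycle, one task needing both robots at
# node 2 during steps 1-2. Each candidate plan starts and ends at the
# robot's assigned station (node 0 or node 1).
tasks = [Task(location=2, window=(1, 2), required=2, value=5.0)]
candidate_plans = [
    [[0, 0, 0, 0], [0, 2, 2, 0]],   # robot 0's feasible plans
    [[1, 1, 1, 1], [1, 2, 2, 1]],   # robot 1's feasible plans
]
trajectories = [plans[0] for plans in candidate_plans]
for _ in range(200):
    log_linear_step(trajectories, candidate_plans, tasks)
print(trajectories, completed_value(trajectories, tasks))
```

In this sketch each robot evaluates the global completed-task value directly (an identical-interest utility); the paper's game-theoretic formulation may instead assign each robot its own utility (for example, a marginal-contribution style payoff), which is not specified in the abstract.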
Pages: 187-192
Number of pages: 6