Q-learning algorithms for constrained Markov decision processes with randomized monotone policies: Application to MIMO transmission control

Cited by: 59
Authors
Djonin, Dejan V. [1 ]
Krishnamurthy, Vikram [2]
Affiliations
[1] Dyaptive Inc, Vancouver, BC V6E 4A6, Canada
[2] Univ British Columbia, Dept Elect & Comp Engn, Vancouver, BC V6T 1Z4, Canada
Keywords
constrained Markov decision process (CMDP); delay constraints; monotone policies; Q learning; randomized policies; reinforcement learning; supermodularity; transmission scheduling; V-BLAST;
DOI
10.1109/TSP.2007.893228
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808 ; 0809 ;
Abstract
This paper presents novel Q-learning-based stochastic control algorithms for rate and power control in V-BLAST transmission systems. The algorithms exploit the supermodularity and monotonic structure results derived in the companion paper. The rate and power control problem is posed as a stochastic optimization problem with the goal of minimizing the average transmission power under a constraint on the average delay, which can be interpreted as the quality-of-service requirement of a given application. The standard Q-learning algorithm is modified to handle the constraint so that it can adaptively learn a structured optimal policy for unknown channel/traffic statistics. We discuss the convergence of the proposed algorithms and explore their properties in simulations. To address the issue of unknown transmission costs in an unknown time-varying environment, we propose a variant of the Q-learning algorithm in which power costs are estimated in an online fashion, and we show that this algorithm converges to the optimal solution as long as the power cost estimates are asymptotically unbiased.
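The abstract describes Q-learning modified to handle an average-delay constraint while minimizing average power. A common way to realize this is a two-timescale Lagrangian scheme: the per-stage cost combines power with a multiplier-weighted delay term, and the multiplier is adapted on a slower timescale. The sketch below is a minimal toy illustration of that idea only, not the authors' exact V-BLAST algorithm; the buffer dynamics, cost functions, step sizes, and constraint bound are all simplified assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not the paper's V-BLAST model):
# states = buffer occupancy levels, actions = transmission rates.
N_STATES, N_ACTIONS = 5, 3
DELAY_BOUND = 1.5          # average-delay constraint (QoS requirement)
GAMMA = 0.95               # discount factor standing in for the average-cost criterion

def power_cost(action):
    """Instantaneous power cost; grows with transmission rate."""
    return float(action) ** 2

def step(state, action):
    """Hypothetical buffer dynamics: random arrivals minus served packets."""
    arrivals = rng.integers(0, 2)
    next_state = int(np.clip(state + arrivals - action, 0, N_STATES - 1))
    delay_cost = float(next_state)   # buffer occupancy proxies queueing delay
    return next_state, delay_cost

Q = np.zeros((N_STATES, N_ACTIONS))
lam = 0.0                  # Lagrange multiplier for the delay constraint
state = 0

for t in range(1, 50_001):
    # epsilon-greedy exploration over transmission rates
    if rng.random() < 0.1:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(np.argmin(Q[state]))
    next_state, delay = step(state, action)
    # Lagrangian per-stage cost: power plus multiplier-weighted delay
    cost = power_cost(action) + lam * delay
    alpha = 1.0 / (1.0 + t * 1e-3)   # decaying learning-rate
    td = cost + GAMMA * Q[next_state].min() - Q[state, action]
    Q[state, action] += alpha * td
    # Slower-timescale multiplier update: subgradient ascent on the constraint
    lam = max(0.0, lam + (1.0 / t) * (delay - DELAY_BOUND))
    state = next_state

policy = np.argmin(Q, axis=1)   # greedy policy over the learned Q-table
print(policy, round(lam, 3))
```

In this toy run the learned greedy policy can be inspected for the monotone structure the paper exploits (transmission rate nondecreasing in buffer occupancy); the paper's contribution is a learning scheme that enforces such structure rather than merely hoping it emerges.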
Pages: 2170 - 2181 (12 pages)