Power Control With QoS Guarantees: A Differentiable Projection-Based Unsupervised Learning Framework

Cited by: 9
Authors
Alizadeh, Mehrazin [1 ]
Tabassum, Hina [1 ]
Affiliations
[1] York Univ, Dept Elect Engn & Comp Sci, Toronto, ON M3J 1P3, Canada
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords
Power control; learning to optimize (L2O); deep learning (DL); unsupervised learning; differentiable projection; multi-user; interference; resource allocation
DOI
10.1109/TCOMM.2023.3282220
CLC classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject classification codes
0808; 0809;
Abstract
Deep neural networks (DNNs) are emerging as a potential solution to NP-hard wireless resource allocation problems. However, in the presence of intricate constraints, e.g., users' quality-of-service (QoS) constraints, guaranteeing constraint satisfaction becomes a fundamental challenge. In this paper, we propose a novel unsupervised learning framework to solve the classical power control problem in a multi-user interference channel, where the objective is to maximize the network sum-rate under users' minimum data rate (QoS) requirements and power budget constraints. Utilizing a differentiable projection function, two novel deep learning (DL) solutions are pursued. The first is called Deep Implicit Projection Network (DIPNet), and the second is called Deep Explicit Projection Network (DEPNet). DIPNet utilizes a differentiable convex optimization layer to implicitly define a projection function. In contrast, DEPNet uses an explicitly defined projection function, which has an iterative nature and relies on a differentiable correction process. DIPNet requires convex constraints, whereas DEPNet does not require convexity and has reduced computational complexity. To further enhance the sum-rate performance of the proposed models, the Frank-Wolfe (FW) algorithm is applied to their outputs. Extensive simulations show that, compared to existing DNNs, the proposed solutions not only improve the achievable data rate but also achieve zero constraint-violation probability. The proposed solutions also outperform classic optimization methods in terms of computation time.
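The explicit-projection idea in the abstract can be illustrated with a toy sketch: a DNN emits a raw power vector, and an iterative correction step pushes it toward satisfying per-user minimum-rate and power-budget constraints. This is only a conceptual numpy sketch under assumed channel values and a naive multiplicative update, not the paper's actual DEPNet correction process.

```python
import numpy as np

def rates(p, G, noise):
    """Per-user achievable rate log2(1 + SINR_i) for power vector p.

    G[i, j] is the channel gain from transmitter j to receiver i;
    the diagonal carries the desired-link gains.
    """
    signal = np.diag(G) * p
    interference = G @ p - signal  # off-diagonal contributions only
    return np.log2(1.0 + signal / (noise + interference))

def correct(p, G, noise, r_min, p_max, step=1.2, iters=50):
    """Crude iterative correction toward the feasible set.

    Boosts the power of any user whose rate is below r_min,
    clipping at the power budget p_max. A stand-in for a
    differentiable correction, purely for illustration.
    """
    p = p.copy()
    for _ in range(iters):
        r = rates(p, G, noise)
        violating = r < r_min
        if not violating.any():
            break
        p[violating] = np.minimum(p[violating] * step, p_max)
    return p

# Hypothetical 2-user channel with weak cross-interference.
G = np.array([[1.0, 0.01],
              [0.01, 1.0]])
p0 = np.array([0.1, 0.1])            # raw (infeasible) DNN output
p = correct(p0, G, noise=0.1, r_min=1.0, p_max=1.0)
# both users now meet r_min = 1.0 within the power budget
```

Note that such a simple multiplicative boost need not converge in general (raising one user's power raises interference to others); the appeal of the paper's differentiable projection layers is precisely that they guarantee feasibility while remaining trainable end-to-end.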
Pages: 4605-4619 (15 pages)