Deep Reinforcement Learning for Cognitive Radar Spectrum Sharing: A Continuous Control Approach

Cited by: 6
Authors
Flandermeyer, Shane A. [1 ]
Mattingly, Rylee G. [1 ]
Metcalf, Justin G. [1 ]
Affiliations
[1] The University of Oklahoma, Advanced Radar Research Center (ARRC), Department of Electrical and Computer Engineering (ECE), Norman, OK 73072, United States
Source
IEEE Transactions on Radar Systems | 2024, Vol. 2
Abstract
The growing demand for RF spectrum has placed considerable strain on radar systems, which must share limited spectrum resources with an ever-increasing number of devices. It is necessary to design radar systems with coexistence in mind so that the radar avoids harmful mutual interference that compromises the quality of service for other users in the channel. This work presents a deep reinforcement learning (RL) approach to spectrum sharing that enables a pulse-agile radar to operate in heavily congested spectral environments. A cognitive agent dynamically adapts the radar waveform to trade off collision avoidance, bandwidth utilization, and distortion losses due to pulse-agile behavior. Unlike existing RL approaches, this method formulates waveform parameter selection as a continuous control task, significantly increasing the flexibility of the agent and making it possible to scale its behavior to wideband, high-resolution operation. The RL agent uses a recurrent attention-based neural network to select actions, making it suitable for parallelized, real-time implementation. The proposed algorithm makes minimal assumptions about the spectral environment or other users in the spectrum, and the performance of the approach is evaluated on over-the-air data collected from a USRP X310 software-defined radio (SDR) system. Through these experiments, it is shown that the RL approach provides a flexible method for solving multi-objective waveform design problems in dynamic, high-dimensional spectrum environments. © 2023 IEEE.
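The abstract describes a recurrent, attention-based policy that maps sensed spectrum occupancy to continuous waveform parameters. The sketch below is only an illustration of that general idea, not the authors' architecture: the layer sizes, the choice of PyTorch modules, the action semantics (normalized start frequency and bandwidth), and all names (RecurrentAttentionPolicy, n_subbands, etc.) are assumptions introduced here for clarity.

```python
# Illustrative sketch (not the paper's code): a recurrent, attention-based
# policy mapping a per-bin spectrum occupancy observation to continuous
# waveform parameters. All dimensions and names are assumptions.
import torch
import torch.nn as nn

class RecurrentAttentionPolicy(nn.Module):
    def __init__(self, n_subbands=128, d_model=64, n_heads=4, n_actions=2):
        super().__init__()
        # Project each frequency bin's occupancy value into the model dimension.
        self.embed = nn.Linear(1, d_model)
        # Self-attention across frequency bins of the sensed spectrum.
        enc_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=128, batch_first=True
        )
        self.attention = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Recurrence across successive pulses/decision steps.
        self.gru = nn.GRU(d_model, d_model, batch_first=True)
        # Continuous action head (e.g., mean of a squashed Gaussian policy).
        self.mu = nn.Linear(d_model, n_actions)

    def forward(self, spectrum, hidden=None):
        # spectrum: (batch, n_subbands) occupancy/energy per frequency bin.
        x = self.embed(spectrum.unsqueeze(-1))           # (batch, n_subbands, d_model)
        x = self.attention(x).mean(dim=1, keepdim=True)  # pool over frequency bins
        x, hidden = self.gru(x, hidden)                  # one recurrent step
        action = torch.tanh(self.mu(x.squeeze(1)))       # continuous actions in [-1, 1]
        return action, hidden

# Example: one decision step from a hypothetical 128-bin occupancy snapshot.
policy = RecurrentAttentionPolicy()
obs = torch.rand(1, 128)
action, h = policy(obs)  # e.g., normalized [start frequency, bandwidth]
```

In an actual continuous-control RL setup, such a network would typically be trained with an actor-critic method, with the reward trading off collision avoidance, bandwidth utilization, and distortion from pulse agility as the abstract describes.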
DOI
10.1109/TRS.2024.3353112
Pages: 125-137