Efficacious implementation of deep Q-routing in opportunistic network

Cited by: 0
Authors
Renu Dalal
Manju Khari
Affiliations
[1] Guru Gobind Singh Indraprastha University,Department of Computer Science
[2] Maharaja Surajmal Institute of Technology,Information Technology Department
[3] GGSIPU,School of Computer and System Science
[4] Jawaharlal Nehru University
Source
Soft Computing | 2023 / Volume 27
Keywords
Machine learning; Opportunistic network; Routing protocol; Q-learning; ONE tool;
DOI
Not available
Abstract
An opportunistic network (Oppnet) consists of wireless nodes that communicate within Bluetooth transmission range. Such networks can tolerate long delays during message transmission, but their dynamic topology and unstable node-to-node connections make routing difficult; providing efficient and reliable message delivery is therefore a crucial task in Oppnet. To address these issues, this article introduces a reinforcement-learning approach based on Deep Q-learning. In this approach, the Q-value is used to select the best current intermediate node for forwarding a packet from the source Oppnet node to the destination Oppnet node. The protocol is simulated in the ONE tool, and Q-routing in Oppnet is found to work more efficiently than conventional protocols such as Epidemic, Maximum Probability (Max-Prob), and the Probability Routing Protocol using History of Encounters and Transitivity (PRoPHET). Compared with PRoPHET under a varying number of nodes, the proposed protocol achieves 71% lower energy consumption, 66% lower overhead ratio, 61% lower end-to-end delay, 51% higher throughput, and 10% higher delivery ratio.
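The Q-value-based next-hop selection described in the abstract can be sketched with a tabular Q-routing example. This is a minimal illustration in the spirit of the classic Q-routing update (link cost plus the neighbor's best remaining estimate), not the paper's actual deep Q-network; the node names, learning rate, and costs are all assumed for illustration.

```python
# Hypothetical sketch of tabular Q-routing next-hop selection.
# Q[node][(dest, neighbor)] = estimated delivery cost of forwarding
# a packet bound for `dest` from `node` via `neighbor`.
ALPHA = 0.5  # learning rate (assumed value)

Q = {
    "A": {("D", "B"): 3.0, ("D", "C"): 5.0},
    "B": {("D", "D"): 1.0},
    "C": {("D", "D"): 1.0},
}

def best_next_hop(node, dest):
    """Pick the neighbor with the lowest Q-value for this destination."""
    candidates = {nbr: q for (d, nbr), q in Q[node].items() if d == dest}
    return min(candidates, key=candidates.get)

def update(node, dest, neighbor, link_cost):
    """After forwarding via `neighbor`, move the Q-value toward the
    link cost plus the neighbor's best remaining estimate."""
    nbr_est = min((q for (d, _), q in Q[neighbor].items() if d == dest),
                  default=0.0)
    old = Q[node][(dest, neighbor)]
    Q[node][(dest, neighbor)] = old + ALPHA * (link_cost + nbr_est - old)

hop = best_next_hop("A", "D")        # "B": lower estimate than "C"
update("A", "D", hop, link_cost=2.0)
print(hop, round(Q["A"][("D", hop)], 2))  # → B 3.0
```

A deep Q-routing variant replaces the lookup table with a neural network that maps node/contact features to Q-values, but the selection and update logic follows the same pattern.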
Pages: 9459-9477
Page count: 18
Related papers
(50 in total)
  • [31] An efficient low-delay routing algorithm for community-based opportunistic network
    Lin, Y. (yplin@hnu.edu.cn)
    2013, Binary Information Press, Bethel, CT, United States (09): 9447-9456
  • [32] Design and Implementation of QoS routing Protocol for Wireless Sensor Network
    Feng Xianyang
    Yangzan
    2013 FOURTH INTERNATIONAL CONFERENCE ON DIGITAL MANUFACTURING AND AUTOMATION (ICDMA), 2013: 428-431
  • [33] Efficient low-delay routing algorithm for opportunistic network based on adaptive compression of vector
    Ren, Zhi
    Liu, Yan-Wei
    Chen, Hong
    Li, Ji-Bi
    Chen, Qian-Bin
    Xi Tong Gong Cheng Yu Dian Zi Ji Shu/Systems Engineering and Electronics, 2014, 36 (02): 368-375
  • [34] An Enforceable Scheme for Packet Forwarding Cooperation in Network-Coding Wireless Networks With Opportunistic Routing
    Chen, Tingting
    Zhong, Sheng
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2014, 63 (09): 4476-4491
  • [35] A novel routing protocol based on grey wolf optimization and Q learning for wireless body area network
    Bedi, Pradeep
    Das, Sanjoy
    Goyal, S. B.
    Shukla, Piyush Kumar
    Mirjalili, Seyedali
    Kumar, Manoj
    EXPERT SYSTEMS WITH APPLICATIONS, 2022, 210
  • [36] Reputation-Based Opportunistic Routing Protocol Using Q-Learning for MANET Attacked by Malicious Nodes
    Ryu, Joonsu
    Kim, Sungwook
    IEEE ACCESS, 2023, 11: 47701-47711
  • [37] Energy aware DBSCAN and mobility aware balanced q-learning based opportunistic routing protocol in MANET
    A. Daniel Gurwin Das
    E. Anna Devi
    Peer-to-Peer Networking and Applications, 2025, 18 (4)
  • [38] Proposal of a Deep Q-network with Profit Sharing
    Miyazaki, Kazuteru
    8TH ANNUAL INTERNATIONAL CONFERENCE ON BIOLOGICALLY INSPIRED COGNITIVE ARCHITECTURES, BICA 2017 (EIGHTH ANNUAL MEETING OF THE BICA SOCIETY), 2018, 123: 302-307
  • [39] Application of Deep Q-Network in Portfolio Management
    Gao, Ziming
    Gao, Yuan
    Hu, Yi
    Jiang, Zhengyong
    Su, Jionglong
    2020 5TH IEEE INTERNATIONAL CONFERENCE ON BIG DATA ANALYTICS (IEEE ICBDA 2020), 2020: 268-275
  • [40] Deep Reinforcement Learning. Case Study: Deep Q-Network
    Vrejoiu, Mihnea Horia
    ROMANIAN JOURNAL OF INFORMATION TECHNOLOGY AND AUTOMATIC CONTROL-REVISTA ROMANA DE INFORMATICA SI AUTOMATICA, 2019, 29 (03): 65-78