An autonomous agent for negotiation with multiple communication channels using parametrized deep Q-network

Cited by: 8
Authors
Chen, Siqi [1 ]
Su, Ran [1 ]
Affiliations
[1] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
multi-agent systems; cooperative games; reinforcement learning; deep learning; human-agent interaction;
DOI
10.3934/mbe.2022371
Chinese Library Classification (CLC)
Q [Biological Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
Agent-based negotiation aims to automate the negotiation process on behalf of humans to save time and effort. While successful, current research largely restricts communication between negotiation agents to the exchange of offers. Beyond this simple form of interaction, many real-world settings also involve linguistic channels through which negotiators can express intentions, ask questions, and discuss plans. The information bandwidth of traditional negotiation is therefore restricted and grounded solely in the action space. Against this background, a negotiation agent called MCAN (multiple channel automated negotiation) is described that models negotiation with multiple communication channels as a Markov decision process with a hybrid action space. The agent employs a novel deep reinforcement learning technique to learn an efficient strategy and can interact with different opponents, i.e., other negotiation agents or human players. Specifically, the agent leverages parametrized deep Q-networks (P-DQNs), which provide solutions for hybrid discrete-continuous action spaces, thereby learning a comprehensive negotiation strategy that integrates linguistic communication skills with bidding strategies. Extensive experimental results show that the MCAN agent outperforms other agents as well as human players in terms of average utility. A user study further indicates that the agent is rated highly on human perception measures. Moreover, a comparative experiment shows how the P-DQN algorithm improves the performance of the MCAN agent.
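To make the hybrid action space concrete, below is a minimal, illustrative P-DQN-style sketch in PyTorch. It is not the authors' implementation: the state encoding, the four assumed discrete acts (e.g., making an offer versus sending a linguistic message), the single continuous parameter per act, and all network sizes are assumptions chosen only to show how a parameter network and a Q-network cooperate to select a discrete act together with its continuous parameter.

```python
# Illustrative P-DQN-style action selection sketch; all dimensions and the
# negotiation-specific action set below are assumptions, not the paper's design.
import torch
import torch.nn as nn

STATE_DIM = 16        # assumed encoding of the negotiation state (time, last offers, ...)
NUM_DISCRETE = 4      # assumed discrete acts, e.g. {offer, accept, ask-question, state-intention}
PARAM_DIM = 1         # each act carries one continuous parameter, e.g. the target utility of a bid

class ParamActor(nn.Module):
    """Maps a state to a continuous parameter vector for every discrete action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_DISCRETE * PARAM_DIM), nn.Tanh(),  # parameters scaled to [-1, 1]
        )

    def forward(self, state):                      # state: (batch, STATE_DIM)
        return self.net(state).view(-1, NUM_DISCRETE, PARAM_DIM)

class ParamQNetwork(nn.Module):
    """Estimates Q(s, k, x_k) for every discrete action k, given all action parameters."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + NUM_DISCRETE * PARAM_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_DISCRETE),
        )

    def forward(self, state, params):              # params: (batch, NUM_DISCRETE, PARAM_DIM)
        flat = params.view(params.size(0), -1)
        return self.net(torch.cat([state, flat], dim=-1))        # (batch, NUM_DISCRETE)

def select_action(state, actor, q_net, epsilon=0.1):
    """Epsilon-greedy P-DQN selection: pick the discrete act with the highest Q-value
    and attach the continuous parameter the actor produced for that act."""
    with torch.no_grad():
        params = actor(state)                                    # (1, NUM_DISCRETE, PARAM_DIM)
        q_values = q_net(state, params)                          # (1, NUM_DISCRETE)
        if torch.rand(1).item() < epsilon:                       # exploration
            k = torch.randint(NUM_DISCRETE, (1,)).item()
        else:                                                    # exploitation
            k = q_values.argmax(dim=-1).item()
    return k, params[0, k]                                       # discrete act + its parameter

# Example: one decision step on a dummy negotiation state.
actor, q_net = ParamActor(), ParamQNetwork()
state = torch.zeros(1, STATE_DIM)
act, act_param = select_action(state, actor, q_net)
print(act, act_param)
```

In a full training loop, both networks would be updated from replay-buffer transitions: the Q-network with a standard DQN temporal-difference loss, and the parameter network by ascending the Q-values it induces, which is the essence of the P-DQN approach referenced in the abstract.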
Pages: 7933-7951
Number of pages: 19